CycleGAN, Image-to-Image Translation

In this notebook, we're going to define and train a CycleGAN to read in an image from a set $X$ and transform it so that it looks as if it belongs in set $Y$. Specifically, we'll look at a set of images of Yosemite national park taken either during the summer of winter. The seasons are our two domains!

The objective will be to train generators that learn to transform an image from domain $X$ into an image that looks like it came from domain $Y$ (and vice versa).

Some examples of image data in both sets are pictured below.

Unpaired Training Data

These images do not come with labels, but CycleGANs give us a way to learn the mapping between one image domain and another using an unsupervised approach. A CycleGAN is designed for image-to-image translation and it learns from unpaired training data. This means that in order to train a generator to translate images from domain $X$ to domain $Y$, we do not have to have exact correspondences between individual images in those domains. For example, in the paper that introduced CycleGANs, the authors are able to translate between images of horses and zebras, even though there are no images of a zebra in exactly the same position as a horse or with exactly the same background, etc. Thus, CycleGANs enable learning a mapping from one domain $X$ to another domain $Y$ without having to find perfectly-matched, training pairs!

CycleGAN and Notebook Structure

A CycleGAN is made of two types of networks: discriminators, and generators. In this example, the discriminators are responsible for classifying images as real or fake (for both $X$ and $Y$ kinds of images). The generators are responsible for generating convincing, fake images for both kinds of images.

This notebook will detail the steps you should take to define and train such a CycleGAN.

  1. You'll load in the image data using PyTorch's DataLoader class to efficiently read in images from a specified directory.
  2. Then, you'll be tasked with defining the CycleGAN architecture according to provided specifications. You'll define the discriminator and the generator models.
  3. You'll complete the training cycle by calculating the adversarial and cycle consistency losses for the generator and discriminator network and completing a number of training epochs. It's suggested that you enable GPU usage for training.
  4. Finally, you'll evaluate your model by looking at the loss over time and looking at sample, generated images.

Load and Visualize the Data

We'll first load in and visualize the training data, importing the necessary libraries to do so.

If you are working locally, you'll need to download the data as a zip file by clicking here.

It may be named summer2winter-yosemite/ with a dash or an underscore, so take note, extract the data to your home directory and make sure the below image_dir matches. Then you can proceed with the following loading code.

In [1]:
# loading in and transforming data
import os
import torch
from torch.utils.data import DataLoader
import torchvision
import torchvision.datasets as datasets
import torchvision.transforms as transforms

# visualizing data
import matplotlib.pyplot as plt
import numpy as np
import warnings

%matplotlib inline

DataLoaders

The get_data_loader function returns training and test DataLoaders that can load data efficiently and in specified batches. The function has the following parameters:

  • image_type: summer or winter, the names of the directories where the X and Y images are stored
  • image_dir: name of the main image directory, which holds all training and test images
  • image_size: resized, square image dimension (all images will be resized to this dim)
  • batch_size: number of images in one batch of data

The test data is strictly for feeding to our generators, later on, so we can visualize some generated samples on fixed, test data.

You can see that this function is also responsible for making sure our images are of the right, square size (128x128x3) and converted into Tensor image types.

It's suggested that you use the default values of these parameters.

Note: If you are trying this code on a different set of data, you may get better results with larger image_size and batch_size parameters. If you change the batch_size, make sure that you create complete batches in the training loop otherwise you may get an error when trying to save sample data.

In [2]:
def get_data_loader(image_type, image_dir='../input/cyclegan/summer2winter_yosemite', 
                    image_size=256, batch_size=16, num_workers=0):
    """Returns training and test data loaders for a given image type, either 'summer' or 'winter'.

    These images will be resized to 256x256x3, by default, converted into Tensors, and normalized.
    """

    # resize and normalize the images
    transform = transforms.Compose([transforms.Resize(image_size), # resize to 128x128
                                    transforms.ToTensor()])

    # get training and test directories
    image_path = './' + image_dir
    train_path = os.path.join(image_path, image_type)
    test_path = os.path.join(image_path, 'test_{}'.format(image_type))

    # define datasets using ImageFolder
    train_dataset = datasets.ImageFolder(train_path, transform)
    test_dataset = datasets.ImageFolder(test_path, transform)

    # create and return DataLoaders
    train_loader = DataLoader(dataset=train_dataset, batch_size=batch_size,
                              shuffle=True, num_workers=num_workers)

    test_loader = DataLoader(dataset=test_dataset, batch_size=batch_size,
                             shuffle=False, num_workers=num_workers)

    return train_loader, test_loader
In [3]:
# create train and test dataloaders for images from the two domains X and Y
# image_type = directory names for our data
dataloader_X, test_dataloader_X = get_data_loader(image_type='summer')
dataloader_Y, test_dataloader_Y = get_data_loader(image_type='winter')

Display some Training Images

Below we provide a function imshow that reshape some given images and converts them to NumPy images so that they can be displayed by plt. This cell should display a grid that contains a batch of image data from set $X$.

In [4]:
# helper imshow function
def imshow(img):
    npimg = img.numpy()
    plt.imshow(np.transpose(npimg, (1, 2, 0)))

# get some images from X
dataiter = iter(dataloader_X)

# the "_" is a placeholder for no labels
images, _ = dataiter.next()

# show images
fig = plt.figure(figsize=(12, 8))
imshow(torchvision.utils.make_grid(images))

Next, let's visualize a batch of images from set $Y$.

In [5]:
# get some images from Y
dataiter = iter(dataloader_Y)
images, _ = dataiter.next()

# show images
fig = plt.figure(figsize=(12,8))
imshow(torchvision.utils.make_grid(images))

Pre-processing: scaling from -1 to 1

We need to do a bit of pre-processing; we know that the output of our tanh activated generator will contain pixel values in a range from -1 to 1, and so, we need to rescale our training images to a range of -1 to 1. (Right now, they are in a range from 0-1.)

In [6]:
# current range
img = images[0]

print('Min: ', img.min())
print('Max: ', img.max())
Min:  tensor(0.)
Max:  tensor(1.)
In [7]:
# helper scale function
def scale(x, feature_range=(-1, 1)):
    """Scale takes in an image x and returns that image,

    scaled with a feature_range of pixel values from -1 to 1. 
    This function assumes that the input x is already scaled from 0-1.
    """

    # scale from 0-1 to feature_range
    min, max = feature_range
    return x * (max - min) + min
In [8]:
# scaled range
scaled_img = scale(img)

print('Scaled min: ', scaled_img.min())
print('Scaled max: ', scaled_img.max())
Scaled min:  tensor(-1.)
Scaled max:  tensor(1.)

Define the Model

A CycleGAN is made of two discriminator and two generator networks.

Discriminators

The discriminators, $D_X$ and $D_Y$, in this CycleGAN are convolutional neural networks that see an image and attempt to classify it as real or fake. In this case, real is indicated by an output close to 1 and fake as close to 0. The discriminators have the following architecture:

This network sees a 128x128x3 image, and passes it through 5 convolutional layers that downsample the image by a factor of 2. The first four convolutional layers have a BatchNorm and ReLu activation function applied to their output, and the last acts as a classification layer that outputs a prediction map with depth of one. Contrary to what the figure above indicates, the final output is not required to have a width and depth of one. In the original paper, the authors passed a 4x4 kernel with stride of 1 in the final convolutional layer. You should replicate that strategy.

Convolutional Helper Function

To define the discriminators, you're expected to use the provided conv function, which creates a convolutional layer + an optional batch norm layer.

In [9]:
import torch.nn as nn

class Identity(nn.Module):
    def forward(self, x):
        return x
In [10]:
import functools

def get_norm_layer(norm_type='batch'):
    """Returns a normalization layer.

    Parameters:
        norm_type (str)  -- name of normalization layer: batch | instance | none

    For BatchNorm, use learnable affine parameters and track running statistics (mean/stddev).
    For InstanceNorm, do not use learnable affine parameters and do not track running statistics.
    """

    if norm_type == 'batch':
        norm_layer = functools.partial(nn.BatchNorm2d, affine=True, track_running_stats=True)
    elif norm_type == 'instance':
        norm_layer = functools.partial(nn.InstanceNorm2d, affine=False, track_running_stats=False)
    elif norm_type == 'none':
        def norm_layer(x): return Identity()
    else:
        raise NotImplementedError('normalization layer [%s] is not found' % norm_type)

    return norm_layer
In [11]:
# helper conv function
def conv(input_nc, output_nc, kernel_size, norm_layer, bias, stride=2, padding=1):
    """Creates a convolutional layer, with an optional normalization layer.
    """

    layers = [nn.Conv2d(input_nc, output_nc, kernel_size, stride, padding, bias=bias)]

    if norm_layer != None:
        layers += [norm_layer(out_channels)]

    return nn.Sequential(*layers)

Define the Discriminator Architecture

Your task is to fill in the __init__ function with the specified 5 layer conv net architecture. Both $D_X$ and $D_Y$ have the same architecture, so we only need to define one class, and later instantiate two discriminators.

It's recommended that you use a kernel size of 4x4 and use that to determine the correct stride and padding size for each layer. This Stanford resource may also help in determining stride and padding sizes.

  • Define your convolutional layers in __init__
  • Then fill in the forward behavior of the network

The forward function defines how an input image moves through the discriminator, and the most important thing is to pass it through your convolutional layers in order, with a ReLu activation function applied to all but the last layer.

You should not apply a sigmoid activation function to the output, here, and that is because we are planning on using a squared error loss for training. And you can read more about this loss function, later in the notebook.

In [12]:
class Discriminator(nn.Module):
    """Defines a PatchGAN discriminator.
    """

    def __init__(self, in_channels, n_filters=64, n_layers=3, norm_layer=get_norm_layer('batch')):
        """Constructs a PatchGAN discriminator.

        Parameters:
            in_channels (int)  -- number of channels in input images
            n_filters (int)    -- number of filters in the last conv layer
            n_layers (int)     -- number of conv layers in discriminator
            norm_layer         -- normalization layer
        """

        super(Discriminator, self).__init__()

        # define all convolutional layers
        # should accept an RGB image as input and output a single value

        # no need to use bias as BatchNorm2d has affine parameters
        if type(norm_layer) == functools.partial:
            use_bias = norm_layer.func == nn.InstanceNorm2d
        else:
            use_bias = norm_layer == nn.InstanceNorm2d

        # convolutional layers, increasing in depth
        # first layer has *no* batchnorm
        model = [nn.Conv2d(in_channels, n_filters, kernel_size=4, stride=2, padding=1),
                 nn.LeakyReLU(0.2, True)]
        mult = 1
        prev_mult = 1

        for n in range(1, n_layers):  # gradually increase the number of filters
            prev_mult = mult
            mult = min(2 ** n, 8)

            model += [nn.Conv2d(n_filters * prev_mult, n_filters * mult, kernel_size=4,
                                stride=2, padding=1, bias=use_bias),
                      norm_layer(n_filters * mult),
                      nn.LeakyReLU(0.2, True)]

        prev_mult = mult
        mult = min(2 ** n_layers, 8)

        model += [nn.Conv2d(n_filters * prev_mult, n_filters * mult, kernel_size=4,
                            stride=1, padding=1, bias=use_bias),
                  norm_layer(n_filters * mult),
                  nn.LeakyReLU(0.2, True)]

        # classification layer
        model += [nn.Conv2d(n_filters * mult, 1, kernel_size=4, stride=1, padding=1)]
        self.model = nn.Sequential(*model)


    def forward(self, input):
        return self.model(input)

Generators

The generators, G_XtoY and G_YtoX (sometimes called F), are made of an encoder, a conv net that is responsible for turning an image into a smaller feature representation, and a decoder, a transpose_conv net that is responsible for turning that representation into an transformed image. These generators, one from XtoY and one from YtoX, have the following architecture:

This network sees a 128x128x3 image, compresses it into a feature representation as it goes through three convolutional layers and reaches a series of residual blocks. It goes through a few (typically 6 or more) of these residual blocks, then it goes through three transpose convolutional layers (sometimes called de-conv layers) which upsample the output of the resnet blocks and create a new image!

Note that most of the convolutional and transpose-convolutional layers have BatchNorm and ReLu functions applied to their outputs with the exception of the final transpose convolutional layer, which has a tanh activation function applied to the output. Also, the residual blocks are made of convolutional and batch normalization layers, which we'll go over in more detail, next.


Residual Block Class

To define the generators, you're expected to define a ResidualBlock class which will help you connect the encoder and decoder portions of the generators. You might be wondering, what exactly is a Resnet block? It may sound familiar from something like ResNet50 for image classification, pictured below.

ResNet blocks rely on connecting the output of one layer with the input of an earlier layer. The motivation for this structure is as follows: very deep neural networks can be difficult to train. Deeper networks are more likely to have vanishing or exploding gradients and, therefore, have trouble reaching convergence; batch normalization helps with this a bit. However, during training, we often see that deep networks respond with a kind of training degradation. Essentially, the training accuracy stops improving and gets saturated at some point during training. In the worst cases, deep models would see their training accuracy actually worsen over time!

One solution to this problem is to use Resnet blocks that allow us to learn so-called residual functions as they are applied to layer inputs. You can read more about this proposed architecture in the paper, Deep Residual Learning for Image Recognition by Kaiming He et. al, and the below image is from that paper.

Residual Functions

Usually, when we create a deep learning model, the model (several layers with activations applied) is responsible for learning a mapping, M, from an input x to an output y.

M(x) = y (Equation 1)

Instead of learning a direct mapping from x to y, we can instead define a residual function

F(x) = M(x) - x

This looks at the difference between a mapping applied to x and the original input, x. F(x) is, typically, two convolutional layers + normalization layer and a ReLu in between. These convolutional layers should have the same number of inputs as outputs. This mapping can then be written as the following; a function of the residual function and the input x. The addition step creates a kind of loop that connects the input x to the output, y:

M(x) = F(x) + x (Equation 2) or

y = F(x) + x (Equation 3)

Optimizing a Residual Function

The idea is that it is easier to optimize this residual function F(x) than it is to optimize the original mapping M(x). Consider an example; what if we want y = x?

From our first, direct mapping equation, Equation 1, we could set M(x) = x but it is easier to solve the residual equation F(x) = 0, which, when plugged in to Equation 3, yields y = x.

Defining the ResidualBlock Class

To define the ResidualBlock class, we'll define residual functions (a series of layers), apply them to an input x and add them to that same input. This is defined just like any other neural network, with an __init__ function and the addition step in the forward function.

In our case, you'll want to define the residual block as:

  • Two convolutional layers with the same size input and output
  • Batch normalization applied to the outputs of the convolutional layers
  • A ReLu function on the output of the first convolutional layer

Then, in the forward function, add the input x to this residual block. Feel free to use the helper conv function from above to create this block.

In [13]:
class ResNetBlock(nn.Module):
    """Defines a ResNet.
    """

    def __init__(self, channels, padding_type, norm_layer, use_dropout, use_bias):
        """Constructs a ResNet.

        A ResNet is a residual block with a skip connection.
        Construct a ResNet with "get_residual_block" function,
        and implement the skip connection in "forward" function.
        """

        super(ResNetBlock, self).__init__()
        self.model = self.get_residual_block(channels, padding_type, norm_layer, use_dropout, use_bias)


    def get_residual_block(self, channels, padding_type, norm_layer, use_dropout, use_bias):
        """Construct residual blcok.

        Parameters:
            channels (int)      -- number of channels in conv layer.
            padding_type (str)  -- name of padding layer: reflect | replicate | zero
            norm_layer          -- normalization layer
            use_dropout (bool)  -- if use dropout layers.
            use_bias (bool)     -- if conv layer uses bias or not

        Returns a residual block (with conv layers, normalization layers, and a non-linearity layer).
        """

        residual_block = []
        padding = 0

        if padding_type == 'reflect':
            residual_block += [nn.ReflectionPad2d(1)]
        elif padding_type == 'replicate':
            residual_block += [nn.ReplicationPad2d(1)]
        elif padding_type == 'zero':
            padding = 1
        else:
            raise NotImplementedError('padding [%s] is not implemented' % padding_type)

        residual_block += [nn.Conv2d(channels, channels, kernel_size=3, padding=padding, bias=use_bias),
                           norm_layer(channels),
                           nn.ReLU(True)]

        if use_dropout:
            residual_block += [nn.Dropout(0.5)]

        padding = 0

        if padding_type == 'reflect':
            residual_block += [nn.ReflectionPad2d(1)]
        elif padding_type == 'replicate':
            residual_block += [nn.ReplicationPad2d(1)]
        elif padding_type == 'zero':
            padding = 1
        else:
            raise NotImplementedError('padding [%s] is not implemented' % padding_type)

        residual_block += [nn.Conv2d(channels, channels, kernel_size=3, padding=padding, bias=use_bias),
                           norm_layer(channels)]

        return nn.Sequential(*residual_block)


    def forward(self, x):
        out = x + self.model(x)  # add a skip connection
        return out
In [14]:
class UNetBlock(nn.Module):
    """Defines an U-Net and its subnets with skip connection.
    """

    def __init__(self, out_channels, n_filters, in_channels=None,
                 subnet=None, outermost=False, innermost=False, 
                 norm_layer=get_norm_layer('batch'), use_dropout=False):
        """Constructs an U-Net and its subnets with skip connections.

        Parameters:
            out_channels (int)  -- number of channels in output conv layer
            n_filters (int)     -- number of channels in inner conv layer
            in_channels (int)   -- number of channels in input images
            subnet (UnetBlock)  -- previously defined subnets
            outermost (bool)    -- if this U-Net block is the outermost block
            innermost (bool)    -- if this U-Net block is the innermost block
            norm_layer          -- normalization layer
            use_dropout (bool)  -- if use dropout layers
        """

        super(UNetBlock, self).__init__()

        self.outermost = outermost

        if type(norm_layer) == functools.partial:
            use_bias = norm_layer.func == nn.InstanceNorm2d
        else:
            use_bias = norm_layer == nn.InstanceNorm2d

        if in_channels is None:
            in_channels = out_channels

        down_relu = nn.LeakyReLU(0.2, True)
        down_conv = nn.Conv2d(in_channels, n_filters, kernel_size=4,
                              stride=2, padding=1, bias=use_bias)
        down_norm = norm_layer(n_filters)

        up_relu = nn.ReLU(True)
        up_norm = norm_layer(out_channels)

        if outermost:
            up_conv = nn.ConvTranspose2d(n_filters * 2, out_channels, kernel_size=4,
                                         stride=2, padding=1)
            down = [down_conv]
            up = [up_relu, up_conv, nn.Tanh()]
            model = down + [subnet] + up
        elif innermost:
            up_conv = nn.ConvTranspose2d(n_filters, out_channels, kernel_size=4,
                                         stride=2, padding=1, bias=use_bias)
            down = [down_relu, down_conv]
            up = [up_relu, up_conv, up_norm]
            model = down + up
        else:
            up_conv = nn.ConvTranspose2d(n_filters * 2, out_channels, kernel_size=4,
                                         stride=2, padding=1, bias=use_bias)
            down = [down_relu, down_conv, down_norm]
            up = [up_relu, up_conv, up_norm]

            if use_dropout:
                model = down + [subnet] + up + [nn.Dropout(0.5)]
            else:
                model = down + [subnet] + up

        self.model = nn.Sequential(*model)


    def forward(self, x):
        if self.outermost:
            return self.model(x)
        else:  # add skip connections
            return torch.cat([x, self.model(x)], 1)

Transpose Convolutional Helper Function

To define the generators, you're expected to use the above conv function, ResidualBlock class, and the below deconv helper function, which creates a transpose convolutional layer + an optional batchnorm layer.

In [15]:
# helper deconv function
def deconv(in_channels, out_channels, kernel_size,
           norm_laye, use_bias,
           stride=2, padding=1, output_padding=1):
    """Creates a transpose convolutional layer, with an optional normalization layer.
    """

    layers = [nn.ConvTranspose2d(in_channels, out_channels,
                                 kernel_size, stride,
                                 padding, output_padding, bias=use_bias)]

    # optional batch norm layer
    if norm_layer != None:
        layers += [norm_layer(out_channels)]

    return nn.Sequential(*layers)

Define the Generator Architecture

  • Complete the __init__ function with the specified 3 layer encoder convolutional net, a series of residual blocks (the number of which is given by n_blocks), and then a 3 layer decoder transpose convolutional net.
  • Then complete the forward function to define the forward behavior of the generators. Recall that the last layer has a tanh activation function.

Both $G_{XtoY}$ and $G_{YtoX}$ have the same architecture, so we only need to define one class, and later instantiate two generators.

In [16]:
class ResNetGenerator(nn.Module):
    """Defines a ResNet based generator

    that consists of residual blocks between a few downsampling/upsampling operations.
    """

    def __init__(self, in_channels, out_channels, n_filters=64,
                 n_blocks=9, norm_layer=get_norm_layer('batch'),
                 use_dropout=False, padding_type='reflect'):
        """Constructs a ResNet based generator.

        Parameters:
            in_channels (int)   -- number of channels in input images
            out_channels (int)  -- number of channels in output images
            n_filters (int)     -- number of channels in the last conv layer
            n_blocks (int)      -- number of residual blocks
            norm_layer          -- normalization layer
            use_dropout (bool)  -- if use dropout layers
            padding_type (str)  -- name of padding layer in conv layers: reflect | replicate | zero
        """

        assert(n_blocks >= 0)
        super(ResNetGenerator, self).__init__()

        if type(norm_layer) == functools.partial:
            use_bias = norm_layer.func == nn.InstanceNorm2d
        else:
            use_bias = norm_layer == nn.InstanceNorm2d

        model = [nn.ReflectionPad2d(3),
                 nn.Conv2d(in_channels, n_filters, kernel_size=7, padding=0, bias=use_bias),
                 norm_layer(n_filters),
                 nn.ReLU(True)]

        n_downs = 2

        for i in range(n_downs):  # add downsampling layers
            mult = 2 ** i
            model += [nn.Conv2d(n_filters * mult, n_filters * mult * 2, kernel_size=3,
                                stride=2, padding=1, bias=use_bias),
                      norm_layer(n_filters * mult * 2),
                      nn.ReLU(True)]

        mult = 2 ** n_downs

        for i in range(n_blocks):  # add ResNet blocks
            model += [ResNetBlock(n_filters * mult, padding_type=padding_type, norm_layer=norm_layer,
                                  use_dropout=use_dropout, use_bias=use_bias)]

        for i in range(n_downs):  # add upsampling layers
            mult = 2 ** (n_downs - i)
            model += [nn.ConvTranspose2d(n_filters * mult, int(n_filters*mult / 2), kernel_size=3,
                                         stride=2, padding=1, output_padding=1, bias=use_bias),
                      norm_layer(int(n_filters*mult / 2)),
                      nn.ReLU(True)]

        model += [nn.ReflectionPad2d(3)]
        model += [nn.Conv2d(n_filters, out_channels, kernel_size=7, padding=0)]
        model += [nn.Tanh()]

        self.model = nn.Sequential(*model)


    def forward(self, x):
        return self.model(x)
In [17]:
class UNetGenerator(nn.Module):
    """Creates a U-Net based generator
    """

    def __init__(self, in_channels, out_channels, n_filters=64,
                 n_downs=8, norm_layer=get_norm_layer('batch'), use_dropout=False):
        """Constructs a U-Net based generator.

        Parameters:
            in_channels (int)   -- number of channels in input image
            out_channels (int)  -- number of channels in output image
            n_filters (int)     -- number of filters in the last conv layer
            n_downs (int)       -- number of downsamplings in U-Net.
            norm_layer          -- normalization layer
            use_dropout (bool)  -- if use dropout layers

        Constructs an U-Net from the innermost layer to the outermost layer.
        It is a recursive process.
        """

        super(UNetGenerator, self).__init__()

        # construct a U-Net structure
        # add the innermost layer
        model = UNetBlock(n_filters * 8, n_filters * 8, in_channels=None,
                          subnet=None, norm_layer=norm_layer, innermost=True)

        for i in range(n_downs - 5):  # add inner layers with inner_nc * 8 channels
            model = UNetBlock(n_filters * 8, n_filters * 8, in_channels=None,
                              subnet=model, norm_layer=norm_layer, use_dropout=use_dropout)

        # gradually reduce the number of channels from inner_nc * 8 to inner_nc
        model = UNetBlock(n_filters * 4, n_filters * 8, in_channels=None,
                          subnet=model, norm_layer=norm_layer)

        model = UNetBlock(n_filters * 2, n_filters * 4, in_channels=None,
                          subnet=model, norm_layer=norm_layer)

        model = UNetBlock(n_filters, n_filters * 2, in_channels=None,
                          subnet=model, norm_layer=norm_layer)

        # add the outermost layer
        self.model = UNetBlock(out_channels, n_filters, in_channels=in_channels,
                               subnet=model, outermost=True, norm_layer=norm_layer)


    def forward(self, x):
        return self.model(x)

Create the complete network

Using the classes you defined earlier, you can define the discriminators and generators necessary to create a complete CycleGAN. The given parameters should work for training.

First, create two discriminators, one for checking if $X$ sample images are real, and one for checking if $Y$ sample images are real. Then the generators. Instantiate two of them, one for transforming a painting into a realistic photo and one for transforming a photo into a painting.

In [18]:
def create_model(generator_type, in_channels=3, out_channels=3, **lookup):
    """Builds the generators and discriminators.
    """

    # instantiate generators
    if generator_type == 'U-Net':
        G_XtoY = UNetGenerator(in_channels, out_channels, **lookup)
        G_YtoX = UNetGenerator(in_channels, out_channels, **lookup)
    elif generator_type == 'ResNet':
        G_XtoY = ResNetGenerator(in_channels, out_channels, **lookup)
        G_YtoX = ResNetGenerator(in_channels, out_channels, **lookup)

    # instantiate discriminators
    D_X = Discriminator(in_channels, **lookup)
    D_Y = Discriminator(in_channels, **lookup)

    # move models to GPU, if available
    if torch.cuda.is_available():
        device = torch.device('cuda:0')

        G_XtoY.to(device)
        G_YtoX.to(device)
        D_X.to(device)
        D_Y.to(device)

        print('Models moved to GPU.')
    else:
        print('Only CPU available.')

    return G_XtoY, G_YtoX, D_X, D_Y
In [19]:
def init_weights(net, init_type, init_gain=0.02):
    """Initialize network weights.

    Parameters:
        net (network)      -- network to be initialized
        init_type (str)    -- name of initialization method: normal | xavier | kaiming | orthogonal
        init_gain (float)  -- scaling factor for normal, xavier and orthogonal

    Use 'normal' in the original pix2pix and CycleGAN paper.
    But xavier and kaiming might swork better for some applications.
    Feel free to try yourself.
    """

    # define the initialization function
    def init_func(m):
        classname = m.__class__.__name__

        if hasattr(m, 'weight') and (classname.find('Conv') != -1):
            if init_type == 'normal':
                nn.init.normal_(m.weight.data, 0.0, init_gain)
            elif init_type == 'xavier':
                nn.init.xavier_normal_(m.weight.data, gain=init_gain)
            elif init_type == 'kaiming':
                nn.init.kaiming_normal_(m.weight.data, a=0, mode='fan_in')
            elif init_type == 'orthogonal':
                nn.init.orthogonal_(m.weight.data, gain=init_gain)
            else:
                raise NotImplementedError('initialization method [%s] is not implemented' % init_type)

            if hasattr(m, 'bias') and m.bias is not None:
                nn.init.constant_(m.bias.data, 0.0)

        # BatchNorm layer's weight is not a matrix; only normal distribution applies.
        elif classname.find('BatchNorm2d') != -1:
            nn.init.normal_(m.weight.data, 1.0, init_gain)
            nn.init.constant_(m.bias.data, 0.0)

    print('initialize network with %s' % init_type)

    # apply the initialization function <init_func>
    net.apply(init_func)
In [20]:
# call the function to get models
G_XtoY, G_YtoX, D_X, D_Y = create_model(generator_type='U-Net')
Models moved to GPU.
In [21]:
init_weights(G_XtoY, init_type='orthogonal')
init_weights(G_YtoX, init_type='orthogonal')
init_weights(D_X, init_type='orthogonal')
init_weights(D_Y, init_type='orthogonal')
initialize network with orthogonal
initialize network with orthogonal
initialize network with orthogonal
initialize network with orthogonal

Check that you've implemented this correctly

The function create_model should return the two generator and two discriminator networks. After you've defined these discriminator and generator components, it's good practice to check your work. The easiest way to do this is to print out your model architecture and read through it to make sure the parameters are what you expected. The next cell will print out their architectures.

In [22]:
# helper function for printing the model architecture
def print_models(G_XtoY, G_YtoX, D_X, D_Y):
    """Prints model information for the generators and discriminators.
    """

    print("                     G_XtoY                    ")
    print("-----------------------------------------------")
    print(G_XtoY)
    print()

    print("                     G_YtoX                    ")
    print("-----------------------------------------------")
    print(G_YtoX)
    print()

    print("                      D_X                      ")
    print("-----------------------------------------------")
    print(D_X)
    print()

    print("                      D_Y                      ")
    print("-----------------------------------------------")
    print(D_Y)
    print()

# print all of the models
print_models(G_XtoY, G_YtoX, D_X, D_Y)
                     G_XtoY                    
-----------------------------------------------
UNetGenerator(
  (model): UNetBlock(
    (model): Sequential(
      (0): Conv2d(3, 64, kernel_size=(4, 4), stride=(2, 2), padding=(1, 1), bias=False)
      (1): UNetBlock(
        (model): Sequential(
          (0): LeakyReLU(negative_slope=0.2, inplace=True)
          (1): Conv2d(64, 128, kernel_size=(4, 4), stride=(2, 2), padding=(1, 1), bias=False)
          (2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
          (3): UNetBlock(
            (model): Sequential(
              (0): LeakyReLU(negative_slope=0.2, inplace=True)
              (1): Conv2d(128, 256, kernel_size=(4, 4), stride=(2, 2), padding=(1, 1), bias=False)
              (2): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
              (3): UNetBlock(
                (model): Sequential(
                  (0): LeakyReLU(negative_slope=0.2, inplace=True)
                  (1): Conv2d(256, 512, kernel_size=(4, 4), stride=(2, 2), padding=(1, 1), bias=False)
                  (2): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
                  (3): UNetBlock(
                    (model): Sequential(
                      (0): LeakyReLU(negative_slope=0.2, inplace=True)
                      (1): Conv2d(512, 512, kernel_size=(4, 4), stride=(2, 2), padding=(1, 1), bias=False)
                      (2): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
                      (3): UNetBlock(
                        (model): Sequential(
                          (0): LeakyReLU(negative_slope=0.2, inplace=True)
                          (1): Conv2d(512, 512, kernel_size=(4, 4), stride=(2, 2), padding=(1, 1), bias=False)
                          (2): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
                          (3): UNetBlock(
                            (model): Sequential(
                              (0): LeakyReLU(negative_slope=0.2, inplace=True)
                              (1): Conv2d(512, 512, kernel_size=(4, 4), stride=(2, 2), padding=(1, 1), bias=False)
                              (2): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
                              (3): UNetBlock(
                                (model): Sequential(
                                  (0): LeakyReLU(negative_slope=0.2, inplace=True)
                                  (1): Conv2d(512, 512, kernel_size=(4, 4), stride=(2, 2), padding=(1, 1), bias=False)
                                  (2): ReLU(inplace=True)
                                  (3): ConvTranspose2d(512, 512, kernel_size=(4, 4), stride=(2, 2), padding=(1, 1), bias=False)
                                  (4): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
                                )
                              )
                              (4): ReLU(inplace=True)
                              (5): ConvTranspose2d(1024, 512, kernel_size=(4, 4), stride=(2, 2), padding=(1, 1), bias=False)
                              (6): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
                            )
                          )
                          (4): ReLU(inplace=True)
                          (5): ConvTranspose2d(1024, 512, kernel_size=(4, 4), stride=(2, 2), padding=(1, 1), bias=False)
                          (6): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
                        )
                      )
                      (4): ReLU(inplace=True)
                      (5): ConvTranspose2d(1024, 512, kernel_size=(4, 4), stride=(2, 2), padding=(1, 1), bias=False)
                      (6): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
                    )
                  )
                  (4): ReLU(inplace=True)
                  (5): ConvTranspose2d(1024, 256, kernel_size=(4, 4), stride=(2, 2), padding=(1, 1), bias=False)
                  (6): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
                )
              )
              (4): ReLU(inplace=True)
              (5): ConvTranspose2d(512, 128, kernel_size=(4, 4), stride=(2, 2), padding=(1, 1), bias=False)
              (6): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
            )
          )
          (4): ReLU(inplace=True)
          (5): ConvTranspose2d(256, 64, kernel_size=(4, 4), stride=(2, 2), padding=(1, 1), bias=False)
          (6): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        )
      )
      (2): ReLU(inplace=True)
      (3): ConvTranspose2d(128, 3, kernel_size=(4, 4), stride=(2, 2), padding=(1, 1))
      (4): Tanh()
    )
  )
)

                     G_YtoX                    
-----------------------------------------------
UNetGenerator(
  (model): UNetBlock(
    (model): Sequential(
      (0): Conv2d(3, 64, kernel_size=(4, 4), stride=(2, 2), padding=(1, 1), bias=False)
      (1): UNetBlock(
        (model): Sequential(
          (0): LeakyReLU(negative_slope=0.2, inplace=True)
          (1): Conv2d(64, 128, kernel_size=(4, 4), stride=(2, 2), padding=(1, 1), bias=False)
          (2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
          (3): UNetBlock(
            (model): Sequential(
              (0): LeakyReLU(negative_slope=0.2, inplace=True)
              (1): Conv2d(128, 256, kernel_size=(4, 4), stride=(2, 2), padding=(1, 1), bias=False)
              (2): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
              (3): UNetBlock(
                (model): Sequential(
                  (0): LeakyReLU(negative_slope=0.2, inplace=True)
                  (1): Conv2d(256, 512, kernel_size=(4, 4), stride=(2, 2), padding=(1, 1), bias=False)
                  (2): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
                  (3): UNetBlock(
                    (model): Sequential(
                      (0): LeakyReLU(negative_slope=0.2, inplace=True)
                      (1): Conv2d(512, 512, kernel_size=(4, 4), stride=(2, 2), padding=(1, 1), bias=False)
                      (2): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
                      (3): UNetBlock(
                        (model): Sequential(
                          (0): LeakyReLU(negative_slope=0.2, inplace=True)
                          (1): Conv2d(512, 512, kernel_size=(4, 4), stride=(2, 2), padding=(1, 1), bias=False)
                          (2): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
                          (3): UNetBlock(
                            (model): Sequential(
                              (0): LeakyReLU(negative_slope=0.2, inplace=True)
                              (1): Conv2d(512, 512, kernel_size=(4, 4), stride=(2, 2), padding=(1, 1), bias=False)
                              (2): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
                              (3): UNetBlock(
                                (model): Sequential(
                                  (0): LeakyReLU(negative_slope=0.2, inplace=True)
                                  (1): Conv2d(512, 512, kernel_size=(4, 4), stride=(2, 2), padding=(1, 1), bias=False)
                                  (2): ReLU(inplace=True)
                                  (3): ConvTranspose2d(512, 512, kernel_size=(4, 4), stride=(2, 2), padding=(1, 1), bias=False)
                                  (4): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
                                )
                              )
                              (4): ReLU(inplace=True)
                              (5): ConvTranspose2d(1024, 512, kernel_size=(4, 4), stride=(2, 2), padding=(1, 1), bias=False)
                              (6): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
                            )
                          )
                          (4): ReLU(inplace=True)
                          (5): ConvTranspose2d(1024, 512, kernel_size=(4, 4), stride=(2, 2), padding=(1, 1), bias=False)
                          (6): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
                        )
                      )
                      (4): ReLU(inplace=True)
                      (5): ConvTranspose2d(1024, 512, kernel_size=(4, 4), stride=(2, 2), padding=(1, 1), bias=False)
                      (6): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
                    )
                  )
                  (4): ReLU(inplace=True)
                  (5): ConvTranspose2d(1024, 256, kernel_size=(4, 4), stride=(2, 2), padding=(1, 1), bias=False)
                  (6): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
                )
              )
              (4): ReLU(inplace=True)
              (5): ConvTranspose2d(512, 128, kernel_size=(4, 4), stride=(2, 2), padding=(1, 1), bias=False)
              (6): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
            )
          )
          (4): ReLU(inplace=True)
          (5): ConvTranspose2d(256, 64, kernel_size=(4, 4), stride=(2, 2), padding=(1, 1), bias=False)
          (6): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        )
      )
      (2): ReLU(inplace=True)
      (3): ConvTranspose2d(128, 3, kernel_size=(4, 4), stride=(2, 2), padding=(1, 1))
      (4): Tanh()
    )
  )
)

                      D_X                      
-----------------------------------------------
Discriminator(
  (model): Sequential(
    (0): Conv2d(3, 64, kernel_size=(4, 4), stride=(2, 2), padding=(1, 1))
    (1): LeakyReLU(negative_slope=0.2, inplace=True)
    (2): Conv2d(64, 128, kernel_size=(4, 4), stride=(2, 2), padding=(1, 1), bias=False)
    (3): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    (4): LeakyReLU(negative_slope=0.2, inplace=True)
    (5): Conv2d(128, 256, kernel_size=(4, 4), stride=(2, 2), padding=(1, 1), bias=False)
    (6): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    (7): LeakyReLU(negative_slope=0.2, inplace=True)
    (8): Conv2d(256, 512, kernel_size=(4, 4), stride=(1, 1), padding=(1, 1), bias=False)
    (9): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    (10): LeakyReLU(negative_slope=0.2, inplace=True)
    (11): Conv2d(512, 1, kernel_size=(4, 4), stride=(1, 1), padding=(1, 1))
  )
)

                      D_Y                      
-----------------------------------------------
Discriminator(
  (model): Sequential(
    (0): Conv2d(3, 64, kernel_size=(4, 4), stride=(2, 2), padding=(1, 1))
    (1): LeakyReLU(negative_slope=0.2, inplace=True)
    (2): Conv2d(64, 128, kernel_size=(4, 4), stride=(2, 2), padding=(1, 1), bias=False)
    (3): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    (4): LeakyReLU(negative_slope=0.2, inplace=True)
    (5): Conv2d(128, 256, kernel_size=(4, 4), stride=(2, 2), padding=(1, 1), bias=False)
    (6): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    (7): LeakyReLU(negative_slope=0.2, inplace=True)
    (8): Conv2d(256, 512, kernel_size=(4, 4), stride=(1, 1), padding=(1, 1), bias=False)
    (9): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    (10): LeakyReLU(negative_slope=0.2, inplace=True)
    (11): Conv2d(512, 1, kernel_size=(4, 4), stride=(1, 1), padding=(1, 1))
  )
)

Discriminator and Generator Losses

Computing the discriminator and the generator losses are key to getting a CycleGAN to train.

Image from original paper by Jun-Yan Zhu et. al.

  • The CycleGAN contains two mapping functions $G: X \rightarrow Y$ and $F: Y \rightarrow X$, and associated adversarial discriminators $D_Y$ and $D_X$. (a) $D_Y$ encourages $G$ to translate $X$ into outputs indistinguishable from domain $Y$, and vice versa for $D_X$ and $F$.

  • To further regularize the mappings, we introduce two cycle consistency losses that capture the intuition that if we translate from one domain to the other and back again we should arrive at where we started. (b) Forward cycle-consistency loss and (c) backward cycle-consistency loss.

Least Squares GANs

We've seen that regular GANs treat the discriminator as a classifier with the sigmoid cross entropy loss function. However, this loss function may lead to the vanishing gradients problem during the learning process. To overcome such a problem, we'll use a least squares loss function for the discriminator. This structure is also referred to as a least squares GAN or LSGAN, and you can read the original paper on LSGANs, here. The authors show that LSGANs are able to generate higher quality images than regular GANs and that this loss type is a bit more stable during training!

Discriminator Losses

The discriminator losses will be mean squared errors between the output of the discriminator, given an image, and the target value, 0 or 1, depending on whether it should classify that image as fake or real. For example, for a real image, x, we can train $D_X$ by looking at how close it is to recognizing and image x as real using the mean squared error:

out_x = D_X(x)
real_err = torch.mean((out_x-1)**2)

Generator Losses

Calculating the generator losses will look somewhat similar to calculating the discriminator loss; there will still be steps in which you generate fake images that look like they belong to the set of $X$ images but are based on real images in set $Y$, and vice versa. You'll compute the "real loss" on those generated images by looking at the output of the discriminator as it's applied to these fake images; this time, your generator aims to make the discriminator classify these fake images as real images.

Cycle Consistency Loss

In addition to the adversarial losses, the generator loss terms will also include the cycle consistency loss. This loss is a measure of how good a reconstructed image is, when compared to an original image.

Say you have a fake, generated image, x_hat, and a real image, y. You can get a reconstructed y_hat by applying G_XtoY(x_hat) = y_hat and then check to see if this reconstruction y_hat and the orginal image y match. For this, we recommend calculating the L1 loss, which is an absolute difference, between reconstructed and real images. You may also choose to multiply this loss by some weight value lambda_weight to convey its importance.

The total generator loss will be the sum of the generator losses and the forward and backward cycle consistency losses.


Define Loss Functions

To help us calculate the discriminator and gnerator losses during training, let's define some helpful loss functions. Here, we'll define three.

  1. real_mse_loss that looks at the output of a discriminator and returns the error based on how close that output is to being classified as real. This should be a mean squared error.
  2. fake_mse_loss that looks at the output of a discriminator and returns the error based on how close that output is to being classified as fake. This should be a mean squared error.
  3. cycle_consistency_loss that looks at a set of real image and a set of reconstructed/generated images, and returns the mean absolute error between them. This has a lambda_weight parameter that will weight the mean absolute error in a batch.

It's recommended that you take a look at the original, CycleGAN paper to get a starting value for lambda_weight.

In [23]:
def real_mse_loss(D_out):
    # how close is the produced output from being "real"?
    return torch.mean((D_out - 1)**2)

def fake_mse_loss(D_out):
    # how close is the produced output from being "fake"?
    return torch.mean(D_out**2)

def cycle_consistency_loss(real_img, reconstructed_img, lambda_weight):
    # calculate reconstruction loss and return weighted loss
    return lambda_weight * torch.mean(torch.abs(reconstructed_img - real_img))

Define the Optimizers

Next, let's define how this model will update its weights. This, like the GANs you may have seen before, uses Adam optimizers for the discriminator and generator. It's again recommended that you take a look at the original, CycleGAN paper to get starting hyperparameter values.

In [24]:
import torch.optim as optim

# hyperparams for Adam optimizers
lr=0.0002
beta1=0.5
beta2=0.999

g_params = list(G_XtoY.parameters()) + list(G_YtoX.parameters())  # Get generator parameters

# create optimizers for the generators and discriminators
g_optimizer = optim.Adam(g_params, lr, [beta1, beta2])
d_x_optimizer = optim.Adam(D_X.parameters(), lr, [beta1, beta2])
d_y_optimizer = optim.Adam(D_Y.parameters(), lr, [beta1, beta2])

Training a CycleGAN

When a CycleGAN trains, and sees one batch of real images from set $X$ and $Y$, it trains by performing the following steps:

Training the Discriminators

  1. Compute the discriminator $D_X$ loss on real images
  2. Generate fake images that look like domain $X$ based on real images in domain $Y$
  3. Compute the fake loss for $D_X$
  4. Compute the total loss and perform backpropagation and $D_X$ optimization
  5. Repeat steps 1-4 only with $D_Y$ and your domains switched!

Training the Generators

  1. Generate fake images that look like domain $X$ based on real images in domain $Y$
  2. Compute the generator loss based on how $D_X$ responds to fake $X$
  3. Generate reconstructed $\hat{Y}$ images based on the fake $X$ images generated in step 1
  4. Compute the cycle consistency loss by comparing the reconstructions with real $Y$ images
  5. Repeat steps 1-4 only swapping domains
  6. Add up all the generator and reconstruction losses and perform backpropagation + optimization

Saving Your Progress

A CycleGAN repeats its training process, alternating between training the discriminators and the generators, for a specified number of training iterations. You've been given code that will save some example generated images that the CycleGAN has learned to generate after a certain number of training iterations. Along with looking at the losses, these example generations should give you an idea of how well your network has trained.

Below, you may choose to keep all default parameters; your only task is to calculate the appropriate losses and complete the training cycle.

In [25]:
import sys
helpers_path = '../input/cyclegan'
sys.path.append(helpers_path)

from importlib import util
spec = util.spec_from_file_location('helpers', helpers_path + '/helpers.py')
helpers = util.module_from_spec(spec)
spec.loader.exec_module(helpers)
In [26]:
# import save code
from helpers import save_samples, checkpoint
In [27]:
# train the network
def training_loop(dataloader_X, dataloader_Y, test_dataloader_X, test_dataloader_Y, n_epochs=4000):

    print_every=10

    # keep track of losses over time
    losses = []

    test_iter_X = iter(test_dataloader_X)
    test_iter_Y = iter(test_dataloader_Y)

    # Get some fixed data from domains X and Y for sampling.
    # These are images that are held constant throughout training,
    # that allow us to inspect the model's performance.
    fixed_X = test_iter_X.next()[0]
    fixed_Y = test_iter_Y.next()[0]
    fixed_X = scale(fixed_X)  # make sure to scale to a range -1 to 1
    fixed_Y = scale(fixed_Y)

    # batches per epoch
    iter_X = iter(dataloader_X)
    iter_Y = iter(dataloader_Y)
    batches_per_epoch = min(len(iter_X), len(iter_Y))

    for epoch in range(1, n_epochs+1):

        # reset iterators for each epoch
        if epoch % batches_per_epoch == 0:
            iter_X = iter(dataloader_X)
            iter_Y = iter(dataloader_Y)

        images_X, _ = iter_X.next()
        images_X = scale(images_X)  # make sure to scale to a range -1 to 1

        images_Y, _ = iter_Y.next()
        images_Y = scale(images_Y)

        # move images to GPU if available (otherwise stay on CPU)
        device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
        images_X = images_X.to(device)
        images_Y = images_Y.to(device)


        # ============================================
        #            TRAIN THE DISCRIMINATORS
        # ============================================

        ##   First: D_X, real and fake loss components    ##
        d_x_optimizer.zero_grad()

        # 1. Compute the discriminator losses on real images
        # 2. Generate fake images that look like domain X based on real images in domain Y
        # 3. Compute the fake loss for D_X
        # 4. Compute the total loss and perform backprop
        d_x_loss = real_mse_loss(D_X(images_X)) + fake_mse_loss(D_X(G_YtoX(images_Y)))
        d_x_loss.backward()
        d_x_optimizer.step()


        ##   Second: D_Y, real and fake loss components    ##
        d_y_optimizer.zero_grad()

        # 1. Compute the discriminator losses on real images
        # 2. Generate fake images that look like domain Y based on real images in domain X
        # 3. Compute the fake loss for D_Y
        # 4. Compute the total loss and perform backprop
        d_y_loss = real_mse_loss(D_Y(images_Y)) + fake_mse_loss(D_Y(G_XtoY(images_X)))
        d_y_loss.backward()
        d_y_optimizer.step()


        # =========================================
        #            TRAIN THE GENERATORS
        # =========================================

        ##   First: generate fake X images and reconstructed Y images    ##
        g_optimizer.zero_grad()

        # 1. Generate fake images that look like domain X based on real images in domain Y
        # 2. Compute the generator loss based on domain X
        # 3. Create a reconstructed Y
        # 4. Compute the cycle consistency loss (the reconstruction loss)
        fake_X = G_YtoX(images_Y)
        g_YtoX_loss = real_mse_loss(D_X(fake_X))
        reconst_X_loss = cycle_consistency_loss(images_Y, G_XtoY(fake_X), lambda_weight=10)


        ##   Second: generate fake Y images and reconstructed X images    ##

        # 1. Generate fake images that look like domain Y based on real images in domain X
        # 2. Compute the generator loss based on domain Y
        # 3. Create a reconstructed X
        # 4. Compute the cycle consistency loss (the reconstruction loss)
        fake_Y = G_XtoY(images_X)
        g_XtoY_loss = real_mse_loss(D_Y(fake_Y))
        reconst_Y_loss = cycle_consistency_loss(images_X, G_YtoX(fake_Y), lambda_weight=10)

        # 5. Add up all generator and reconstructed losses and perform backprop
        g_total_loss = g_YtoX_loss + reconst_X_loss + g_XtoY_loss + reconst_Y_loss
        g_total_loss.backward()
        g_optimizer.step()


        # Print the log info
        if epoch % print_every == 0:
            # append real and fake discriminator losses and the generator loss
            losses.append((d_x_loss.item(), d_y_loss.item(), g_total_loss.item()))
            print('Epoch [{:5d}/{:5d}] | d_X_loss: {:6.4f} | d_Y_loss: {:6.4f} | g_total_loss: {:6.4f}'.format(
                    epoch, n_epochs, d_x_loss.item(), d_y_loss.item(), g_total_loss.item()))

        sample_every=100

        # Save the generated samples
        if epoch % sample_every == 0:
            G_YtoX.eval()  # set generators to eval mode for sample generation
            G_XtoY.eval()
            save_samples(epoch, fixed_Y, fixed_X, G_YtoX, G_XtoY, batch_size=16)
            G_YtoX.train()
            G_XtoY.train()

        # uncomment these lines if you want to save your model
        checkpoint_every=1000

        # save the model parameters
        if epoch % checkpoint_every == 0:
            checkpoint(epoch, G_XtoY, G_YtoX, D_X, D_Y)

    return losses
In [28]:
%%bash
mkdir -p samples_cyclegan
mkdir -p checkpoints_cyclegan
In [29]:
n_epochs = 4000  # keep this small when testing if a model first works, then increase it to >=1000
losses = training_loop(dataloader_X, dataloader_Y, test_dataloader_X, test_dataloader_Y, n_epochs=n_epochs)
Epoch [   10/ 4000] | d_X_loss: 0.5181 | d_Y_loss: 0.5336 | g_total_loss: 8.8130
Epoch [   20/ 4000] | d_X_loss: 0.5835 | d_Y_loss: 0.6285 | g_total_loss: 6.9994
Epoch [   30/ 4000] | d_X_loss: 0.5704 | d_Y_loss: 0.5361 | g_total_loss: 5.1411
Epoch [   40/ 4000] | d_X_loss: 0.4841 | d_Y_loss: 0.5785 | g_total_loss: 4.6519
Epoch [   50/ 4000] | d_X_loss: 0.4475 | d_Y_loss: 0.5513 | g_total_loss: 4.7550
Epoch [   60/ 4000] | d_X_loss: 0.4401 | d_Y_loss: 0.4629 | g_total_loss: 4.7132
Epoch [   70/ 4000] | d_X_loss: 0.4225 | d_Y_loss: 0.5781 | g_total_loss: 4.5197
Epoch [   80/ 4000] | d_X_loss: 0.5337 | d_Y_loss: 0.4210 | g_total_loss: 4.0864
Epoch [   90/ 4000] | d_X_loss: 0.3793 | d_Y_loss: 0.4790 | g_total_loss: 3.9931
Epoch [  100/ 4000] | d_X_loss: 0.4255 | d_Y_loss: 0.5161 | g_total_loss: 4.5544
Saved samples_cyclegan/sample-000100-X-Y.png
Saved samples_cyclegan/sample-000100-Y-X.png
Epoch [  110/ 4000] | d_X_loss: 0.3138 | d_Y_loss: 0.5943 | g_total_loss: 4.2883
Epoch [  120/ 4000] | d_X_loss: 0.3864 | d_Y_loss: 0.3784 | g_total_loss: 4.3741
Epoch [  130/ 4000] | d_X_loss: 0.3799 | d_Y_loss: 0.3943 | g_total_loss: 4.2291
Epoch [  140/ 4000] | d_X_loss: 0.4442 | d_Y_loss: 0.4672 | g_total_loss: 4.4667
Epoch [  150/ 4000] | d_X_loss: 0.3990 | d_Y_loss: 0.4010 | g_total_loss: 4.5174
Epoch [  160/ 4000] | d_X_loss: 0.4855 | d_Y_loss: 0.3377 | g_total_loss: 4.2878
Epoch [  170/ 4000] | d_X_loss: 0.3995 | d_Y_loss: 0.2750 | g_total_loss: 3.9469
Epoch [  180/ 4000] | d_X_loss: 0.3947 | d_Y_loss: 0.3554 | g_total_loss: 4.1522
Epoch [  190/ 4000] | d_X_loss: 0.5652 | d_Y_loss: 0.4221 | g_total_loss: 4.4897
Epoch [  200/ 4000] | d_X_loss: 0.4148 | d_Y_loss: 0.3345 | g_total_loss: 4.0940
Saved samples_cyclegan/sample-000200-X-Y.png
Saved samples_cyclegan/sample-000200-Y-X.png
Epoch [  210/ 4000] | d_X_loss: 0.4333 | d_Y_loss: 0.3511 | g_total_loss: 4.0522
Epoch [  220/ 4000] | d_X_loss: 0.4100 | d_Y_loss: 0.3743 | g_total_loss: 4.2802
Epoch [  230/ 4000] | d_X_loss: 0.4660 | d_Y_loss: 0.4041 | g_total_loss: 3.2236
Epoch [  240/ 4000] | d_X_loss: 0.4092 | d_Y_loss: 0.5445 | g_total_loss: 3.8303
Epoch [  250/ 4000] | d_X_loss: 0.4335 | d_Y_loss: 0.4667 | g_total_loss: 3.4410
Epoch [  260/ 4000] | d_X_loss: 0.4527 | d_Y_loss: 0.4493 | g_total_loss: 3.4002
Epoch [  270/ 4000] | d_X_loss: 0.4325 | d_Y_loss: 0.4118 | g_total_loss: 3.6655
Epoch [  280/ 4000] | d_X_loss: 0.4285 | d_Y_loss: 0.3992 | g_total_loss: 3.0380
Epoch [  290/ 4000] | d_X_loss: 0.4132 | d_Y_loss: 0.4694 | g_total_loss: 3.1027
Epoch [  300/ 4000] | d_X_loss: 0.5118 | d_Y_loss: 0.5041 | g_total_loss: 3.1947
Saved samples_cyclegan/sample-000300-X-Y.png
Saved samples_cyclegan/sample-000300-Y-X.png
Epoch [  310/ 4000] | d_X_loss: 0.4966 | d_Y_loss: 0.5559 | g_total_loss: 3.1344
Epoch [  320/ 4000] | d_X_loss: 0.4754 | d_Y_loss: 0.4499 | g_total_loss: 3.2739
Epoch [  330/ 4000] | d_X_loss: 0.4426 | d_Y_loss: 0.5058 | g_total_loss: 3.7178
Epoch [  340/ 4000] | d_X_loss: 0.4894 | d_Y_loss: 0.5181 | g_total_loss: 3.4212
Epoch [  350/ 4000] | d_X_loss: 0.4784 | d_Y_loss: 0.4742 | g_total_loss: 3.0517
Epoch [  360/ 4000] | d_X_loss: 0.4524 | d_Y_loss: 0.4743 | g_total_loss: 3.1096
Epoch [  370/ 4000] | d_X_loss: 0.4592 | d_Y_loss: 0.4856 | g_total_loss: 3.2708
Epoch [  380/ 4000] | d_X_loss: 0.4459 | d_Y_loss: 0.4889 | g_total_loss: 3.0371
Epoch [  390/ 4000] | d_X_loss: 0.4488 | d_Y_loss: 0.4796 | g_total_loss: 3.1479
Epoch [  400/ 4000] | d_X_loss: 0.4869 | d_Y_loss: 0.8472 | g_total_loss: 3.3972
Saved samples_cyclegan/sample-000400-X-Y.png
Saved samples_cyclegan/sample-000400-Y-X.png
Epoch [  410/ 4000] | d_X_loss: 0.4472 | d_Y_loss: 0.4930 | g_total_loss: 2.8955
Epoch [  420/ 4000] | d_X_loss: 0.4767 | d_Y_loss: 0.4752 | g_total_loss: 3.0709
Epoch [  430/ 4000] | d_X_loss: 0.5004 | d_Y_loss: 0.4530 | g_total_loss: 3.1203
Epoch [  440/ 4000] | d_X_loss: 0.4842 | d_Y_loss: 0.5624 | g_total_loss: 3.1561
Epoch [  450/ 4000] | d_X_loss: 0.4605 | d_Y_loss: 0.4869 | g_total_loss: 2.9831
Epoch [  460/ 4000] | d_X_loss: 0.4733 | d_Y_loss: 0.4153 | g_total_loss: 3.0803
Epoch [  470/ 4000] | d_X_loss: 0.4782 | d_Y_loss: 0.5428 | g_total_loss: 3.3765
Epoch [  480/ 4000] | d_X_loss: 0.5250 | d_Y_loss: 0.4128 | g_total_loss: 3.0251
Epoch [  490/ 4000] | d_X_loss: 0.4991 | d_Y_loss: 0.6362 | g_total_loss: 3.2358
Epoch [  500/ 4000] | d_X_loss: 0.4991 | d_Y_loss: 0.4850 | g_total_loss: 3.0916
Saved samples_cyclegan/sample-000500-X-Y.png
Saved samples_cyclegan/sample-000500-Y-X.png
Epoch [  510/ 4000] | d_X_loss: 0.4502 | d_Y_loss: 0.6261 | g_total_loss: 2.7602
Epoch [  520/ 4000] | d_X_loss: 0.5112 | d_Y_loss: 0.4905 | g_total_loss: 2.8088
Epoch [  530/ 4000] | d_X_loss: 0.4831 | d_Y_loss: 0.5023 | g_total_loss: 2.8888
Epoch [  540/ 4000] | d_X_loss: 0.4705 | d_Y_loss: 0.5205 | g_total_loss: 2.9417
Epoch [  550/ 4000] | d_X_loss: 0.4983 | d_Y_loss: 0.5465 | g_total_loss: 3.2335
Epoch [  560/ 4000] | d_X_loss: 0.4499 | d_Y_loss: 0.4613 | g_total_loss: 2.9092
Epoch [  570/ 4000] | d_X_loss: 0.4275 | d_Y_loss: 0.4912 | g_total_loss: 3.1397
Epoch [  580/ 4000] | d_X_loss: 0.5117 | d_Y_loss: 0.5043 | g_total_loss: 3.2510
Epoch [  590/ 4000] | d_X_loss: 0.3808 | d_Y_loss: 0.4371 | g_total_loss: 2.8867
Epoch [  600/ 4000] | d_X_loss: 0.9192 | d_Y_loss: 0.5113 | g_total_loss: 3.0107
Saved samples_cyclegan/sample-000600-X-Y.png
Saved samples_cyclegan/sample-000600-Y-X.png
Epoch [  610/ 4000] | d_X_loss: 0.4127 | d_Y_loss: 0.5023 | g_total_loss: 2.8064
Epoch [  620/ 4000] | d_X_loss: 0.4424 | d_Y_loss: 0.4942 | g_total_loss: 2.7858
Epoch [  630/ 4000] | d_X_loss: 0.3788 | d_Y_loss: 0.4946 | g_total_loss: 2.8363
Epoch [  640/ 4000] | d_X_loss: 0.3957 | d_Y_loss: 0.5074 | g_total_loss: 2.8651
Epoch [  650/ 4000] | d_X_loss: 0.4364 | d_Y_loss: 0.5233 | g_total_loss: 2.7838
Epoch [  660/ 4000] | d_X_loss: 0.4329 | d_Y_loss: 0.4724 | g_total_loss: 2.6714
Epoch [  670/ 4000] | d_X_loss: 0.4538 | d_Y_loss: 0.4902 | g_total_loss: 3.3407
Epoch [  680/ 4000] | d_X_loss: 0.4229 | d_Y_loss: 0.4760 | g_total_loss: 3.1646
Epoch [  690/ 4000] | d_X_loss: 0.7226 | d_Y_loss: 0.5871 | g_total_loss: 3.2529
Epoch [  700/ 4000] | d_X_loss: 0.4692 | d_Y_loss: 0.4859 | g_total_loss: 2.6606
Saved samples_cyclegan/sample-000700-X-Y.png
Saved samples_cyclegan/sample-000700-Y-X.png
Epoch [  710/ 4000] | d_X_loss: 0.4804 | d_Y_loss: 0.4474 | g_total_loss: 2.7074
Epoch [  720/ 4000] | d_X_loss: 0.4575 | d_Y_loss: 0.4709 | g_total_loss: 2.8631
Epoch [  730/ 4000] | d_X_loss: 0.4463 | d_Y_loss: 0.4757 | g_total_loss: 2.9935
Epoch [  740/ 4000] | d_X_loss: 0.3877 | d_Y_loss: 0.4337 | g_total_loss: 2.6329
Epoch [  750/ 4000] | d_X_loss: 0.4720 | d_Y_loss: 0.4839 | g_total_loss: 2.9113
Epoch [  760/ 4000] | d_X_loss: 0.4365 | d_Y_loss: 0.5207 | g_total_loss: 2.9701
Epoch [  770/ 4000] | d_X_loss: 0.5288 | d_Y_loss: 0.4420 | g_total_loss: 2.5956
Epoch [  780/ 4000] | d_X_loss: 0.4207 | d_Y_loss: 0.5759 | g_total_loss: 2.8698
Epoch [  790/ 4000] | d_X_loss: 0.5058 | d_Y_loss: 0.4514 | g_total_loss: 2.8748
Epoch [  800/ 4000] | d_X_loss: 0.6119 | d_Y_loss: 0.4976 | g_total_loss: 2.7532
Saved samples_cyclegan/sample-000800-X-Y.png
Saved samples_cyclegan/sample-000800-Y-X.png
Epoch [  810/ 4000] | d_X_loss: 0.4633 | d_Y_loss: 0.5222 | g_total_loss: 2.8585
Epoch [  820/ 4000] | d_X_loss: 0.4548 | d_Y_loss: 0.4957 | g_total_loss: 2.8467
Epoch [  830/ 4000] | d_X_loss: 0.5346 | d_Y_loss: 0.5150 | g_total_loss: 3.0304
Epoch [  840/ 4000] | d_X_loss: 0.4357 | d_Y_loss: 0.5699 | g_total_loss: 2.8061
Epoch [  850/ 4000] | d_X_loss: 0.4868 | d_Y_loss: 0.4845 | g_total_loss: 2.9340
Epoch [  860/ 4000] | d_X_loss: 0.3779 | d_Y_loss: 0.5284 | g_total_loss: 2.8410
Epoch [  870/ 4000] | d_X_loss: 0.5580 | d_Y_loss: 0.4794 | g_total_loss: 2.8822
Epoch [  880/ 4000] | d_X_loss: 0.5268 | d_Y_loss: 0.4677 | g_total_loss: 2.7664
Epoch [  890/ 4000] | d_X_loss: 0.4383 | d_Y_loss: 0.4754 | g_total_loss: 2.7367
Epoch [  900/ 4000] | d_X_loss: 0.4907 | d_Y_loss: 0.4825 | g_total_loss: 2.8741
Saved samples_cyclegan/sample-000900-X-Y.png
Saved samples_cyclegan/sample-000900-Y-X.png
Epoch [  910/ 4000] | d_X_loss: 0.3877 | d_Y_loss: 0.4861 | g_total_loss: 2.7910
Epoch [  920/ 4000] | d_X_loss: 0.5080 | d_Y_loss: 0.5276 | g_total_loss: 2.7501
Epoch [  930/ 4000] | d_X_loss: 0.4363 | d_Y_loss: 0.4818 | g_total_loss: 2.6729
Epoch [  940/ 4000] | d_X_loss: 0.4889 | d_Y_loss: 0.4840 | g_total_loss: 2.4835
Epoch [  950/ 4000] | d_X_loss: 0.4249 | d_Y_loss: 0.4910 | g_total_loss: 2.8510
Epoch [  960/ 4000] | d_X_loss: 0.4425 | d_Y_loss: 0.4854 | g_total_loss: 2.5986
Epoch [  970/ 4000] | d_X_loss: 0.5420 | d_Y_loss: 0.4167 | g_total_loss: 2.8059
Epoch [  980/ 4000] | d_X_loss: 0.3868 | d_Y_loss: 0.4987 | g_total_loss: 2.5105
Epoch [  990/ 4000] | d_X_loss: 0.4186 | d_Y_loss: 0.5104 | g_total_loss: 2.7438
Epoch [ 1000/ 4000] | d_X_loss: 0.5307 | d_Y_loss: 0.4931 | g_total_loss: 3.0040
Saved samples_cyclegan/sample-001000-X-Y.png
Saved samples_cyclegan/sample-001000-Y-X.png
Epoch [ 1010/ 4000] | d_X_loss: 0.5481 | d_Y_loss: 0.5091 | g_total_loss: 2.4786
Epoch [ 1020/ 4000] | d_X_loss: 0.4624 | d_Y_loss: 0.4778 | g_total_loss: 2.8193
Epoch [ 1030/ 4000] | d_X_loss: 0.4808 | d_Y_loss: 0.5183 | g_total_loss: 2.5259
Epoch [ 1040/ 4000] | d_X_loss: 0.4446 | d_Y_loss: 0.4502 | g_total_loss: 2.6504
Epoch [ 1050/ 4000] | d_X_loss: 0.5523 | d_Y_loss: 0.3678 | g_total_loss: 2.7446
Epoch [ 1060/ 4000] | d_X_loss: 0.4931 | d_Y_loss: 0.5436 | g_total_loss: 2.4886
Epoch [ 1070/ 4000] | d_X_loss: 0.4594 | d_Y_loss: 0.5820 | g_total_loss: 2.5392
Epoch [ 1080/ 4000] | d_X_loss: 0.4712 | d_Y_loss: 0.4967 | g_total_loss: 2.8293
Epoch [ 1090/ 4000] | d_X_loss: 0.4470 | d_Y_loss: 0.5671 | g_total_loss: 2.5763
Epoch [ 1100/ 4000] | d_X_loss: 0.4531 | d_Y_loss: 0.4713 | g_total_loss: 2.7818
Saved samples_cyclegan/sample-001100-X-Y.png
Saved samples_cyclegan/sample-001100-Y-X.png
Epoch [ 1110/ 4000] | d_X_loss: 0.4713 | d_Y_loss: 0.4948 | g_total_loss: 2.7047
Epoch [ 1120/ 4000] | d_X_loss: 0.5450 | d_Y_loss: 0.4886 | g_total_loss: 2.8280
Epoch [ 1130/ 4000] | d_X_loss: 0.4356 | d_Y_loss: 0.4396 | g_total_loss: 2.8548
Epoch [ 1140/ 4000] | d_X_loss: 0.4082 | d_Y_loss: 0.5245 | g_total_loss: 2.5746
Epoch [ 1150/ 4000] | d_X_loss: 0.4376 | d_Y_loss: 0.4824 | g_total_loss: 2.7255
Epoch [ 1160/ 4000] | d_X_loss: 0.4832 | d_Y_loss: 0.5102 | g_total_loss: 2.8599
Epoch [ 1170/ 4000] | d_X_loss: 0.4253 | d_Y_loss: 0.4626 | g_total_loss: 2.4749
Epoch [ 1180/ 4000] | d_X_loss: 0.4202 | d_Y_loss: 0.5697 | g_total_loss: 2.6633
Epoch [ 1190/ 4000] | d_X_loss: 0.4090 | d_Y_loss: 0.4832 | g_total_loss: 2.8442
Epoch [ 1200/ 4000] | d_X_loss: 0.5805 | d_Y_loss: 0.4563 | g_total_loss: 2.7518
Saved samples_cyclegan/sample-001200-X-Y.png
Saved samples_cyclegan/sample-001200-Y-X.png
Epoch [ 1210/ 4000] | d_X_loss: 0.4191 | d_Y_loss: 0.4798 | g_total_loss: 2.4212
Epoch [ 1220/ 4000] | d_X_loss: 0.4195 | d_Y_loss: 0.4640 | g_total_loss: 2.8908
Epoch [ 1230/ 4000] | d_X_loss: 0.4445 | d_Y_loss: 0.5103 | g_total_loss: 2.5451
Epoch [ 1240/ 4000] | d_X_loss: 0.5276 | d_Y_loss: 0.4663 | g_total_loss: 2.6922
Epoch [ 1250/ 4000] | d_X_loss: 0.4723 | d_Y_loss: 0.3810 | g_total_loss: 2.7280
Epoch [ 1260/ 4000] | d_X_loss: 0.4425 | d_Y_loss: 0.4911 | g_total_loss: 2.6163
Epoch [ 1270/ 4000] | d_X_loss: 0.4797 | d_Y_loss: 0.4620 | g_total_loss: 2.8075
Epoch [ 1280/ 4000] | d_X_loss: 0.3600 | d_Y_loss: 0.4651 | g_total_loss: 2.5531
Epoch [ 1290/ 4000] | d_X_loss: 0.5419 | d_Y_loss: 0.6069 | g_total_loss: 2.8494
Epoch [ 1300/ 4000] | d_X_loss: 0.4665 | d_Y_loss: 0.4815 | g_total_loss: 2.3871
Saved samples_cyclegan/sample-001300-X-Y.png
Saved samples_cyclegan/sample-001300-Y-X.png
Epoch [ 1310/ 4000] | d_X_loss: 0.4223 | d_Y_loss: 0.4257 | g_total_loss: 2.5053
Epoch [ 1320/ 4000] | d_X_loss: 0.4849 | d_Y_loss: 0.4174 | g_total_loss: 2.5344
Epoch [ 1330/ 4000] | d_X_loss: 0.4024 | d_Y_loss: 0.4269 | g_total_loss: 2.8531
Epoch [ 1340/ 4000] | d_X_loss: 0.3483 | d_Y_loss: 0.7633 | g_total_loss: 2.9865
Epoch [ 1350/ 4000] | d_X_loss: 0.4352 | d_Y_loss: 0.4428 | g_total_loss: 2.7220
Epoch [ 1360/ 4000] | d_X_loss: 0.3746 | d_Y_loss: 0.4425 | g_total_loss: 2.5851
Epoch [ 1370/ 4000] | d_X_loss: 1.2532 | d_Y_loss: 0.4581 | g_total_loss: 3.8178
Epoch [ 1380/ 4000] | d_X_loss: 0.5454 | d_Y_loss: 0.4693 | g_total_loss: 2.6184
Epoch [ 1390/ 4000] | d_X_loss: 0.5326 | d_Y_loss: 0.8876 | g_total_loss: 2.4895
Epoch [ 1400/ 4000] | d_X_loss: 0.5245 | d_Y_loss: 0.4320 | g_total_loss: 2.2891
Saved samples_cyclegan/sample-001400-X-Y.png
Saved samples_cyclegan/sample-001400-Y-X.png
Epoch [ 1410/ 4000] | d_X_loss: 0.4708 | d_Y_loss: 0.4297 | g_total_loss: 2.5800
Epoch [ 1420/ 4000] | d_X_loss: 0.4381 | d_Y_loss: 0.4615 | g_total_loss: 2.8413
Epoch [ 1430/ 4000] | d_X_loss: 0.4852 | d_Y_loss: 0.5401 | g_total_loss: 2.2718
Epoch [ 1440/ 4000] | d_X_loss: 0.4575 | d_Y_loss: 0.3806 | g_total_loss: 2.3545
Epoch [ 1450/ 4000] | d_X_loss: 0.4696 | d_Y_loss: 0.4214 | g_total_loss: 2.3857
Epoch [ 1460/ 4000] | d_X_loss: 0.5449 | d_Y_loss: 0.5552 | g_total_loss: 2.5743
Epoch [ 1470/ 4000] | d_X_loss: 0.4342 | d_Y_loss: 0.3104 | g_total_loss: 2.5492
Epoch [ 1480/ 4000] | d_X_loss: 0.4529 | d_Y_loss: 0.3468 | g_total_loss: 2.6202
Epoch [ 1490/ 4000] | d_X_loss: 0.5327 | d_Y_loss: 0.6061 | g_total_loss: 2.7087
Epoch [ 1500/ 4000] | d_X_loss: 0.3839 | d_Y_loss: 0.5866 | g_total_loss: 2.6752
Saved samples_cyclegan/sample-001500-X-Y.png
Saved samples_cyclegan/sample-001500-Y-X.png
Epoch [ 1510/ 4000] | d_X_loss: 0.4825 | d_Y_loss: 0.3880 | g_total_loss: 2.9228
Epoch [ 1520/ 4000] | d_X_loss: 0.4944 | d_Y_loss: 0.3747 | g_total_loss: 2.8951
Epoch [ 1530/ 4000] | d_X_loss: 0.4130 | d_Y_loss: 0.3698 | g_total_loss: 2.6747
Epoch [ 1540/ 4000] | d_X_loss: 0.4015 | d_Y_loss: 0.4480 | g_total_loss: 2.7064
Epoch [ 1550/ 4000] | d_X_loss: 0.4453 | d_Y_loss: 0.5058 | g_total_loss: 2.6988
Epoch [ 1560/ 4000] | d_X_loss: 0.4157 | d_Y_loss: 0.6358 | g_total_loss: 2.8921
Epoch [ 1570/ 4000] | d_X_loss: 0.4579 | d_Y_loss: 0.3402 | g_total_loss: 2.8402
Epoch [ 1580/ 4000] | d_X_loss: 0.4220 | d_Y_loss: 0.4056 | g_total_loss: 2.5986
Epoch [ 1590/ 4000] | d_X_loss: 0.5382 | d_Y_loss: 0.4225 | g_total_loss: 2.9981
Epoch [ 1600/ 4000] | d_X_loss: 0.4526 | d_Y_loss: 0.6430 | g_total_loss: 2.7364
Saved samples_cyclegan/sample-001600-X-Y.png
Saved samples_cyclegan/sample-001600-Y-X.png
Epoch [ 1610/ 4000] | d_X_loss: 0.3790 | d_Y_loss: 0.4188 | g_total_loss: 2.7875
Epoch [ 1620/ 4000] | d_X_loss: 0.4640 | d_Y_loss: 0.3000 | g_total_loss: 3.1804
Epoch [ 1630/ 4000] | d_X_loss: 0.3482 | d_Y_loss: 0.3042 | g_total_loss: 3.2553
Epoch [ 1640/ 4000] | d_X_loss: 0.5022 | d_Y_loss: 0.5370 | g_total_loss: 2.9548
Epoch [ 1650/ 4000] | d_X_loss: 0.4002 | d_Y_loss: 0.2920 | g_total_loss: 3.1939
Epoch [ 1660/ 4000] | d_X_loss: 0.5286 | d_Y_loss: 0.4776 | g_total_loss: 2.7042
Epoch [ 1670/ 4000] | d_X_loss: 0.4266 | d_Y_loss: 0.6142 | g_total_loss: 2.7741
Epoch [ 1680/ 4000] | d_X_loss: 1.2670 | d_Y_loss: 0.3006 | g_total_loss: 3.0362
Epoch [ 1690/ 4000] | d_X_loss: 0.4398 | d_Y_loss: 0.3384 | g_total_loss: 3.2420
Epoch [ 1700/ 4000] | d_X_loss: 0.4399 | d_Y_loss: 0.5288 | g_total_loss: 2.7066
Saved samples_cyclegan/sample-001700-X-Y.png
Saved samples_cyclegan/sample-001700-Y-X.png
Epoch [ 1710/ 4000] | d_X_loss: 0.4747 | d_Y_loss: 0.6424 | g_total_loss: 2.6853
Epoch [ 1720/ 4000] | d_X_loss: 0.3943 | d_Y_loss: 0.4755 | g_total_loss: 2.7453
Epoch [ 1730/ 4000] | d_X_loss: 0.4473 | d_Y_loss: 0.3378 | g_total_loss: 2.6685
Epoch [ 1740/ 4000] | d_X_loss: 0.4909 | d_Y_loss: 0.5379 | g_total_loss: 2.8059
Epoch [ 1750/ 4000] | d_X_loss: 0.4825 | d_Y_loss: 0.2517 | g_total_loss: 2.9962
Epoch [ 1760/ 4000] | d_X_loss: 0.4571 | d_Y_loss: 0.4254 | g_total_loss: 3.1124
Epoch [ 1770/ 4000] | d_X_loss: 0.9837 | d_Y_loss: 0.6898 | g_total_loss: 3.2099
Epoch [ 1780/ 4000] | d_X_loss: 0.4164 | d_Y_loss: 0.4601 | g_total_loss: 2.7536
Epoch [ 1790/ 4000] | d_X_loss: 0.4120 | d_Y_loss: 0.6774 | g_total_loss: 2.7896
Epoch [ 1800/ 4000] | d_X_loss: 0.5306 | d_Y_loss: 0.3373 | g_total_loss: 2.7480
Saved samples_cyclegan/sample-001800-X-Y.png
Saved samples_cyclegan/sample-001800-Y-X.png
Epoch [ 1810/ 4000] | d_X_loss: 0.3708 | d_Y_loss: 0.5195 | g_total_loss: 2.7023
Epoch [ 1820/ 4000] | d_X_loss: 0.5078 | d_Y_loss: 0.2477 | g_total_loss: 3.1342
Epoch [ 1830/ 4000] | d_X_loss: 0.5339 | d_Y_loss: 0.5729 | g_total_loss: 2.9380
Epoch [ 1840/ 4000] | d_X_loss: 0.4702 | d_Y_loss: 0.6039 | g_total_loss: 3.2028
Epoch [ 1850/ 4000] | d_X_loss: 0.3562 | d_Y_loss: 0.4149 | g_total_loss: 3.0261
Epoch [ 1860/ 4000] | d_X_loss: 0.5916 | d_Y_loss: 0.5299 | g_total_loss: 2.6643
Epoch [ 1870/ 4000] | d_X_loss: 0.4165 | d_Y_loss: 0.3755 | g_total_loss: 2.9700
Epoch [ 1880/ 4000] | d_X_loss: 0.4995 | d_Y_loss: 0.4550 | g_total_loss: 2.8127
Epoch [ 1890/ 4000] | d_X_loss: 0.4083 | d_Y_loss: 0.4393 | g_total_loss: 3.4018
Epoch [ 1900/ 4000] | d_X_loss: 0.4317 | d_Y_loss: 0.4139 | g_total_loss: 3.1044
Saved samples_cyclegan/sample-001900-X-Y.png
Saved samples_cyclegan/sample-001900-Y-X.png
Epoch [ 1910/ 4000] | d_X_loss: 0.3777 | d_Y_loss: 0.3701 | g_total_loss: 3.0509
Epoch [ 1920/ 4000] | d_X_loss: 0.3595 | d_Y_loss: 0.4652 | g_total_loss: 2.9341
Epoch [ 1930/ 4000] | d_X_loss: 0.4846 | d_Y_loss: 0.3873 | g_total_loss: 2.9688
Epoch [ 1940/ 4000] | d_X_loss: 0.4083 | d_Y_loss: 0.6689 | g_total_loss: 2.8925
Epoch [ 1950/ 4000] | d_X_loss: 0.4099 | d_Y_loss: 0.3401 | g_total_loss: 2.9658
Epoch [ 1960/ 4000] | d_X_loss: 0.4390 | d_Y_loss: 0.5214 | g_total_loss: 3.0040
Epoch [ 1970/ 4000] | d_X_loss: 0.5049 | d_Y_loss: 0.3828 | g_total_loss: 3.2536
Epoch [ 1980/ 4000] | d_X_loss: 0.3931 | d_Y_loss: 0.4097 | g_total_loss: 2.9936
Epoch [ 1990/ 4000] | d_X_loss: 0.5092 | d_Y_loss: 0.4221 | g_total_loss: 2.8941
Epoch [ 2000/ 4000] | d_X_loss: 0.5953 | d_Y_loss: 0.4489 | g_total_loss: 3.0401
Saved samples_cyclegan/sample-002000-X-Y.png
Saved samples_cyclegan/sample-002000-Y-X.png
Epoch [ 2010/ 4000] | d_X_loss: 0.4420 | d_Y_loss: 0.5735 | g_total_loss: 2.6274
Epoch [ 2020/ 4000] | d_X_loss: 0.3977 | d_Y_loss: 0.3677 | g_total_loss: 3.3244
Epoch [ 2030/ 4000] | d_X_loss: 0.4744 | d_Y_loss: 0.5702 | g_total_loss: 2.8557
Epoch [ 2040/ 4000] | d_X_loss: 0.3377 | d_Y_loss: 0.4381 | g_total_loss: 3.0222
Epoch [ 2050/ 4000] | d_X_loss: 0.4671 | d_Y_loss: 0.4621 | g_total_loss: 2.8481
Epoch [ 2060/ 4000] | d_X_loss: 0.4797 | d_Y_loss: 0.4395 | g_total_loss: 3.2311
Epoch [ 2070/ 4000] | d_X_loss: 0.3724 | d_Y_loss: 0.2736 | g_total_loss: 3.1898
Epoch [ 2080/ 4000] | d_X_loss: 0.4087 | d_Y_loss: 0.4920 | g_total_loss: 2.8927
Epoch [ 2090/ 4000] | d_X_loss: 0.4719 | d_Y_loss: 0.4349 | g_total_loss: 3.6409
Epoch [ 2100/ 4000] | d_X_loss: 0.4268 | d_Y_loss: 0.4480 | g_total_loss: 2.7113
Saved samples_cyclegan/sample-002100-X-Y.png
Saved samples_cyclegan/sample-002100-Y-X.png
Epoch [ 2110/ 4000] | d_X_loss: 0.4158 | d_Y_loss: 0.4218 | g_total_loss: 2.7866
Epoch [ 2120/ 4000] | d_X_loss: 0.3987 | d_Y_loss: 0.4481 | g_total_loss: 2.8861
Epoch [ 2130/ 4000] | d_X_loss: 0.3297 | d_Y_loss: 0.3934 | g_total_loss: 2.9668
Epoch [ 2140/ 4000] | d_X_loss: 0.4514 | d_Y_loss: 0.3387 | g_total_loss: 3.1931
Epoch [ 2150/ 4000] | d_X_loss: 0.2930 | d_Y_loss: 0.3799 | g_total_loss: 2.7643
Epoch [ 2160/ 4000] | d_X_loss: 0.6966 | d_Y_loss: 0.3626 | g_total_loss: 3.2831
Epoch [ 2170/ 4000] | d_X_loss: 0.3183 | d_Y_loss: 0.5030 | g_total_loss: 3.5443
Epoch [ 2180/ 4000] | d_X_loss: 0.4366 | d_Y_loss: 0.3573 | g_total_loss: 3.1751
Epoch [ 2190/ 4000] | d_X_loss: 0.8812 | d_Y_loss: 0.4269 | g_total_loss: 3.0135
Epoch [ 2200/ 4000] | d_X_loss: 0.3447 | d_Y_loss: 0.3524 | g_total_loss: 2.6015
Saved samples_cyclegan/sample-002200-X-Y.png
Saved samples_cyclegan/sample-002200-Y-X.png
Epoch [ 2210/ 4000] | d_X_loss: 0.5471 | d_Y_loss: 0.4251 | g_total_loss: 2.9597
Epoch [ 2220/ 4000] | d_X_loss: 0.5199 | d_Y_loss: 0.4901 | g_total_loss: 2.8360
Epoch [ 2230/ 4000] | d_X_loss: 0.4325 | d_Y_loss: 0.3994 | g_total_loss: 2.8864
Epoch [ 2240/ 4000] | d_X_loss: 0.5122 | d_Y_loss: 0.3045 | g_total_loss: 3.0330
Epoch [ 2250/ 4000] | d_X_loss: 0.3856 | d_Y_loss: 0.5850 | g_total_loss: 3.0294
Epoch [ 2260/ 4000] | d_X_loss: 0.5235 | d_Y_loss: 0.4058 | g_total_loss: 3.1109
Epoch [ 2270/ 4000] | d_X_loss: 0.4409 | d_Y_loss: 0.5079 | g_total_loss: 2.7932
Epoch [ 2280/ 4000] | d_X_loss: 0.4401 | d_Y_loss: 0.3874 | g_total_loss: 3.0974
Epoch [ 2290/ 4000] | d_X_loss: 0.5111 | d_Y_loss: 0.3884 | g_total_loss: 2.9679
Epoch [ 2300/ 4000] | d_X_loss: 0.3351 | d_Y_loss: 0.4479 | g_total_loss: 2.8647
Saved samples_cyclegan/sample-002300-X-Y.png
Saved samples_cyclegan/sample-002300-Y-X.png
Epoch [ 2310/ 4000] | d_X_loss: 0.4549 | d_Y_loss: 0.4192 | g_total_loss: 2.7872
Epoch [ 2320/ 4000] | d_X_loss: 0.3713 | d_Y_loss: 0.3434 | g_total_loss: 2.6710
Epoch [ 2330/ 4000] | d_X_loss: 0.7035 | d_Y_loss: 0.5046 | g_total_loss: 2.9805
Epoch [ 2340/ 4000] | d_X_loss: 0.4369 | d_Y_loss: 0.3057 | g_total_loss: 2.7617
Epoch [ 2350/ 4000] | d_X_loss: 0.3857 | d_Y_loss: 0.4552 | g_total_loss: 2.4578
Epoch [ 2360/ 4000] | d_X_loss: 0.4322 | d_Y_loss: 0.6199 | g_total_loss: 2.8909
Epoch [ 2370/ 4000] | d_X_loss: 0.4090 | d_Y_loss: 0.4242 | g_total_loss: 2.5506
Epoch [ 2380/ 4000] | d_X_loss: 0.2787 | d_Y_loss: 0.4878 | g_total_loss: 3.0147
Epoch [ 2390/ 4000] | d_X_loss: 0.3787 | d_Y_loss: 0.4232 | g_total_loss: 2.8002
Epoch [ 2400/ 4000] | d_X_loss: 0.3396 | d_Y_loss: 0.4175 | g_total_loss: 2.5753
Saved samples_cyclegan/sample-002400-X-Y.png
Saved samples_cyclegan/sample-002400-Y-X.png
Epoch [ 2410/ 4000] | d_X_loss: 0.3782 | d_Y_loss: 0.3215 | g_total_loss: 2.9556
Epoch [ 2420/ 4000] | d_X_loss: 0.3686 | d_Y_loss: 0.3306 | g_total_loss: 3.2482
Epoch [ 2430/ 4000] | d_X_loss: 0.2396 | d_Y_loss: 0.4103 | g_total_loss: 3.6417
Epoch [ 2440/ 4000] | d_X_loss: 0.3607 | d_Y_loss: 0.4567 | g_total_loss: 3.1626
Epoch [ 2450/ 4000] | d_X_loss: 0.3355 | d_Y_loss: 0.5401 | g_total_loss: 3.2728
Epoch [ 2460/ 4000] | d_X_loss: 0.4476 | d_Y_loss: 0.3126 | g_total_loss: 3.2167
Epoch [ 2470/ 4000] | d_X_loss: 0.5054 | d_Y_loss: 0.5360 | g_total_loss: 2.6700
Epoch [ 2480/ 4000] | d_X_loss: 0.3147 | d_Y_loss: 0.5361 | g_total_loss: 2.9677
Epoch [ 2490/ 4000] | d_X_loss: 0.6479 | d_Y_loss: 0.5557 | g_total_loss: 2.3601
Epoch [ 2500/ 4000] | d_X_loss: 0.3641 | d_Y_loss: 0.5284 | g_total_loss: 2.7020
Saved samples_cyclegan/sample-002500-X-Y.png
Saved samples_cyclegan/sample-002500-Y-X.png
Epoch [ 2510/ 4000] | d_X_loss: 0.4258 | d_Y_loss: 0.4684 | g_total_loss: 2.7962
Epoch [ 2520/ 4000] | d_X_loss: 0.4774 | d_Y_loss: 0.4305 | g_total_loss: 2.8671
Epoch [ 2530/ 4000] | d_X_loss: 0.4437 | d_Y_loss: 0.4350 | g_total_loss: 2.8769
Epoch [ 2540/ 4000] | d_X_loss: 0.4448 | d_Y_loss: 0.4074 | g_total_loss: 2.7471
Epoch [ 2550/ 4000] | d_X_loss: 0.4235 | d_Y_loss: 0.5528 | g_total_loss: 2.6980
Epoch [ 2560/ 4000] | d_X_loss: 0.4737 | d_Y_loss: 0.4217 | g_total_loss: 2.5110
Epoch [ 2570/ 4000] | d_X_loss: 0.4236 | d_Y_loss: 0.3628 | g_total_loss: 2.8205
Epoch [ 2580/ 4000] | d_X_loss: 0.3956 | d_Y_loss: 0.4419 | g_total_loss: 2.4431
Epoch [ 2590/ 4000] | d_X_loss: 0.4381 | d_Y_loss: 0.4321 | g_total_loss: 2.6094
Epoch [ 2600/ 4000] | d_X_loss: 0.4761 | d_Y_loss: 0.3832 | g_total_loss: 3.0222
Saved samples_cyclegan/sample-002600-X-Y.png
Saved samples_cyclegan/sample-002600-Y-X.png
Epoch [ 2610/ 4000] | d_X_loss: 0.4803 | d_Y_loss: 0.4223 | g_total_loss: 2.8462
Epoch [ 2620/ 4000] | d_X_loss: 0.3364 | d_Y_loss: 0.4396 | g_total_loss: 2.9018
Epoch [ 2630/ 4000] | d_X_loss: 0.7944 | d_Y_loss: 0.3810 | g_total_loss: 2.7624
Epoch [ 2640/ 4000] | d_X_loss: 0.4956 | d_Y_loss: 0.4799 | g_total_loss: 2.6936
Epoch [ 2650/ 4000] | d_X_loss: 0.4700 | d_Y_loss: 0.4020 | g_total_loss: 2.7518
Epoch [ 2660/ 4000] | d_X_loss: 0.4053 | d_Y_loss: 0.4492 | g_total_loss: 3.0105
Epoch [ 2670/ 4000] | d_X_loss: 0.3992 | d_Y_loss: 0.4907 | g_total_loss: 2.7262
Epoch [ 2680/ 4000] | d_X_loss: 0.4939 | d_Y_loss: 0.4479 | g_total_loss: 2.4854
Epoch [ 2690/ 4000] | d_X_loss: 0.4705 | d_Y_loss: 0.5157 | g_total_loss: 2.6453
Epoch [ 2700/ 4000] | d_X_loss: 0.5949 | d_Y_loss: 0.4622 | g_total_loss: 2.8700
Saved samples_cyclegan/sample-002700-X-Y.png
Saved samples_cyclegan/sample-002700-Y-X.png
Epoch [ 2710/ 4000] | d_X_loss: 0.4654 | d_Y_loss: 0.4260 | g_total_loss: 2.9271
Epoch [ 2720/ 4000] | d_X_loss: 0.3575 | d_Y_loss: 0.4173 | g_total_loss: 2.9397
Epoch [ 2730/ 4000] | d_X_loss: 0.3873 | d_Y_loss: 0.3464 | g_total_loss: 2.8433
Epoch [ 2740/ 4000] | d_X_loss: 0.4078 | d_Y_loss: 0.6363 | g_total_loss: 2.7920
Epoch [ 2750/ 4000] | d_X_loss: 0.2824 | d_Y_loss: 0.5021 | g_total_loss: 2.7478
Epoch [ 2760/ 4000] | d_X_loss: 0.3610 | d_Y_loss: 0.5487 | g_total_loss: 2.7631
Epoch [ 2770/ 4000] | d_X_loss: 0.7171 | d_Y_loss: 0.3893 | g_total_loss: 3.2762
Epoch [ 2780/ 4000] | d_X_loss: 0.3960 | d_Y_loss: 0.3173 | g_total_loss: 3.2914
Epoch [ 2790/ 4000] | d_X_loss: 0.2509 | d_Y_loss: 0.3662 | g_total_loss: 2.8230
Epoch [ 2800/ 4000] | d_X_loss: 0.4608 | d_Y_loss: 0.4489 | g_total_loss: 2.9325
Saved samples_cyclegan/sample-002800-X-Y.png
Saved samples_cyclegan/sample-002800-Y-X.png
Epoch [ 2810/ 4000] | d_X_loss: 0.4299 | d_Y_loss: 0.3862 | g_total_loss: 2.9665
Epoch [ 2820/ 4000] | d_X_loss: 0.3528 | d_Y_loss: 0.2647 | g_total_loss: 3.4732
Epoch [ 2830/ 4000] | d_X_loss: 0.3831 | d_Y_loss: 1.4273 | g_total_loss: 2.8212
Epoch [ 2840/ 4000] | d_X_loss: 0.4080 | d_Y_loss: 0.4889 | g_total_loss: 2.4525
Epoch [ 2850/ 4000] | d_X_loss: 0.3468 | d_Y_loss: 0.4809 | g_total_loss: 2.6359
Epoch [ 2860/ 4000] | d_X_loss: 0.5073 | d_Y_loss: 0.4757 | g_total_loss: 2.3832
Epoch [ 2870/ 4000] | d_X_loss: 0.3514 | d_Y_loss: 0.4613 | g_total_loss: 2.6614
Epoch [ 2880/ 4000] | d_X_loss: 3.0083 | d_Y_loss: 0.4316 | g_total_loss: 4.2691
Epoch [ 2890/ 4000] | d_X_loss: 0.5130 | d_Y_loss: 0.4470 | g_total_loss: 2.4507
Epoch [ 2900/ 4000] | d_X_loss: 0.4958 | d_Y_loss: 0.4425 | g_total_loss: 2.5257
Saved samples_cyclegan/sample-002900-X-Y.png
Saved samples_cyclegan/sample-002900-Y-X.png
Epoch [ 2910/ 4000] | d_X_loss: 0.4979 | d_Y_loss: 0.4195 | g_total_loss: 2.3289
Epoch [ 2920/ 4000] | d_X_loss: 0.5287 | d_Y_loss: 0.4718 | g_total_loss: 2.5465
Epoch [ 2930/ 4000] | d_X_loss: 0.4949 | d_Y_loss: 0.4524 | g_total_loss: 2.5333
Epoch [ 2940/ 4000] | d_X_loss: 0.5107 | d_Y_loss: 0.3674 | g_total_loss: 2.7860
Epoch [ 2950/ 4000] | d_X_loss: 0.5419 | d_Y_loss: 0.4556 | g_total_loss: 2.9978
Epoch [ 2960/ 4000] | d_X_loss: 0.5118 | d_Y_loss: 0.4062 | g_total_loss: 2.5552
Epoch [ 2970/ 4000] | d_X_loss: 0.4917 | d_Y_loss: 0.5664 | g_total_loss: 2.2393
Epoch [ 2980/ 4000] | d_X_loss: 0.5257 | d_Y_loss: 0.3907 | g_total_loss: 2.6297
Epoch [ 2990/ 4000] | d_X_loss: 0.5229 | d_Y_loss: 0.4962 | g_total_loss: 2.6956
Epoch [ 3000/ 4000] | d_X_loss: 0.5179 | d_Y_loss: 0.4967 | g_total_loss: 2.7479
Saved samples_cyclegan/sample-003000-X-Y.png
Saved samples_cyclegan/sample-003000-Y-X.png
Epoch [ 3010/ 4000] | d_X_loss: 0.5087 | d_Y_loss: 0.4346 | g_total_loss: 2.6205
Epoch [ 3020/ 4000] | d_X_loss: 0.5212 | d_Y_loss: 0.4227 | g_total_loss: 2.5839
Epoch [ 3030/ 4000] | d_X_loss: 0.5111 | d_Y_loss: 0.3562 | g_total_loss: 2.8120
Epoch [ 3040/ 4000] | d_X_loss: 0.5067 | d_Y_loss: 0.5086 | g_total_loss: 2.9366
Epoch [ 3050/ 4000] | d_X_loss: 0.5396 | d_Y_loss: 0.4334 | g_total_loss: 2.6637
Epoch [ 3060/ 4000] | d_X_loss: 0.5042 | d_Y_loss: 0.3640 | g_total_loss: 2.7400
Epoch [ 3070/ 4000] | d_X_loss: 0.5030 | d_Y_loss: 0.3519 | g_total_loss: 2.5568
Epoch [ 3080/ 4000] | d_X_loss: 0.5171 | d_Y_loss: 0.6076 | g_total_loss: 2.4610
Epoch [ 3090/ 4000] | d_X_loss: 0.5110 | d_Y_loss: 0.4321 | g_total_loss: 2.6734
Epoch [ 3100/ 4000] | d_X_loss: 0.5009 | d_Y_loss: 0.3801 | g_total_loss: 2.3517
Saved samples_cyclegan/sample-003100-X-Y.png
Saved samples_cyclegan/sample-003100-Y-X.png
Epoch [ 3110/ 4000] | d_X_loss: 0.5238 | d_Y_loss: 0.6341 | g_total_loss: 2.8676
Epoch [ 3120/ 4000] | d_X_loss: 0.5133 | d_Y_loss: 0.3479 | g_total_loss: 2.3770
Epoch [ 3130/ 4000] | d_X_loss: 0.4861 | d_Y_loss: 0.5257 | g_total_loss: 3.0015
Epoch [ 3140/ 4000] | d_X_loss: 0.5174 | d_Y_loss: 0.4591 | g_total_loss: 2.6183
Epoch [ 3150/ 4000] | d_X_loss: 0.4926 | d_Y_loss: 0.3371 | g_total_loss: 2.8798
Epoch [ 3160/ 4000] | d_X_loss: 0.4871 | d_Y_loss: 0.4102 | g_total_loss: 2.8469
Epoch [ 3170/ 4000] | d_X_loss: 0.4944 | d_Y_loss: 0.5460 | g_total_loss: 2.6316
Epoch [ 3180/ 4000] | d_X_loss: 0.4902 | d_Y_loss: 0.3333 | g_total_loss: 2.7585
Epoch [ 3190/ 4000] | d_X_loss: 0.4972 | d_Y_loss: 0.5289 | g_total_loss: 2.6232
Epoch [ 3200/ 4000] | d_X_loss: 0.4883 | d_Y_loss: 0.4453 | g_total_loss: 2.4908
Saved samples_cyclegan/sample-003200-X-Y.png
Saved samples_cyclegan/sample-003200-Y-X.png
Epoch [ 3210/ 4000] | d_X_loss: 0.4786 | d_Y_loss: 0.4272 | g_total_loss: 2.5643
Epoch [ 3220/ 4000] | d_X_loss: 0.5271 | d_Y_loss: 0.3097 | g_total_loss: 2.7490
Epoch [ 3230/ 4000] | d_X_loss: 0.5105 | d_Y_loss: 0.4245 | g_total_loss: 2.6012
Epoch [ 3240/ 4000] | d_X_loss: 0.4844 | d_Y_loss: 0.4146 | g_total_loss: 2.5719
Epoch [ 3250/ 4000] | d_X_loss: 0.5033 | d_Y_loss: 0.4445 | g_total_loss: 2.4317
Epoch [ 3260/ 4000] | d_X_loss: 0.4770 | d_Y_loss: 0.3966 | g_total_loss: 2.9581
Epoch [ 3270/ 4000] | d_X_loss: 0.5016 | d_Y_loss: 0.4366 | g_total_loss: 2.9488
Epoch [ 3280/ 4000] | d_X_loss: 0.4914 | d_Y_loss: 0.3271 | g_total_loss: 2.6550
Epoch [ 3290/ 4000] | d_X_loss: 0.5036 | d_Y_loss: 0.3680 | g_total_loss: 3.0544
Epoch [ 3300/ 4000] | d_X_loss: 0.5431 | d_Y_loss: 0.5260 | g_total_loss: 2.3854
Saved samples_cyclegan/sample-003300-X-Y.png
Saved samples_cyclegan/sample-003300-Y-X.png
Epoch [ 3310/ 4000] | d_X_loss: 0.5002 | d_Y_loss: 0.3655 | g_total_loss: 2.3931
Epoch [ 3320/ 4000] | d_X_loss: 0.4901 | d_Y_loss: 0.3696 | g_total_loss: 2.4796
Epoch [ 3330/ 4000] | d_X_loss: 0.4929 | d_Y_loss: 0.4095 | g_total_loss: 2.6706
Epoch [ 3340/ 4000] | d_X_loss: 0.4785 | d_Y_loss: 0.3974 | g_total_loss: 2.2713
Epoch [ 3350/ 4000] | d_X_loss: 0.4773 | d_Y_loss: 0.2791 | g_total_loss: 2.5656
Epoch [ 3360/ 4000] | d_X_loss: 0.5161 | d_Y_loss: 0.3836 | g_total_loss: 2.4197
Epoch [ 3370/ 4000] | d_X_loss: 0.5098 | d_Y_loss: 0.4982 | g_total_loss: 3.0666
Epoch [ 3380/ 4000] | d_X_loss: 0.4840 | d_Y_loss: 0.4699 | g_total_loss: 2.2464
Epoch [ 3390/ 4000] | d_X_loss: 0.4937 | d_Y_loss: 0.4518 | g_total_loss: 2.3082
Epoch [ 3400/ 4000] | d_X_loss: 0.4835 | d_Y_loss: 0.4502 | g_total_loss: 2.2196
Saved samples_cyclegan/sample-003400-X-Y.png
Saved samples_cyclegan/sample-003400-Y-X.png
Epoch [ 3410/ 4000] | d_X_loss: 0.4876 | d_Y_loss: 0.4637 | g_total_loss: 2.2279
Epoch [ 3420/ 4000] | d_X_loss: 0.4983 | d_Y_loss: 0.4515 | g_total_loss: 2.4763
Epoch [ 3430/ 4000] | d_X_loss: 0.4660 | d_Y_loss: 0.4594 | g_total_loss: 2.1173
Epoch [ 3440/ 4000] | d_X_loss: 0.4972 | d_Y_loss: 0.4684 | g_total_loss: 2.4591
Epoch [ 3450/ 4000] | d_X_loss: 0.4709 | d_Y_loss: 0.5166 | g_total_loss: 2.5004
Epoch [ 3460/ 4000] | d_X_loss: 0.5463 | d_Y_loss: 0.5400 | g_total_loss: 2.1963
Epoch [ 3470/ 4000] | d_X_loss: 0.4881 | d_Y_loss: 0.4084 | g_total_loss: 2.6874
Epoch [ 3480/ 4000] | d_X_loss: 0.4948 | d_Y_loss: 0.4316 | g_total_loss: 2.4777
Epoch [ 3490/ 4000] | d_X_loss: 0.4694 | d_Y_loss: 0.3884 | g_total_loss: 2.4642
Epoch [ 3500/ 4000] | d_X_loss: 0.4883 | d_Y_loss: 0.4771 | g_total_loss: 2.4564
Saved samples_cyclegan/sample-003500-X-Y.png
Saved samples_cyclegan/sample-003500-Y-X.png
Epoch [ 3510/ 4000] | d_X_loss: 0.5047 | d_Y_loss: 0.3901 | g_total_loss: 2.4878
Epoch [ 3520/ 4000] | d_X_loss: 0.4750 | d_Y_loss: 0.5720 | g_total_loss: 2.5298
Epoch [ 3530/ 4000] | d_X_loss: 0.4823 | d_Y_loss: 0.4600 | g_total_loss: 2.3007
Epoch [ 3540/ 4000] | d_X_loss: 0.4812 | d_Y_loss: 0.3817 | g_total_loss: 2.2538
Epoch [ 3550/ 4000] | d_X_loss: 0.4704 | d_Y_loss: 0.4019 | g_total_loss: 2.4060
Epoch [ 3560/ 4000] | d_X_loss: 0.4960 | d_Y_loss: 0.4386 | g_total_loss: 2.3538
Epoch [ 3570/ 4000] | d_X_loss: 0.4947 | d_Y_loss: 0.4206 | g_total_loss: 2.3749
Epoch [ 3580/ 4000] | d_X_loss: 0.4774 | d_Y_loss: 0.4561 | g_total_loss: 2.5811
Epoch [ 3590/ 4000] | d_X_loss: 0.5662 | d_Y_loss: 0.4500 | g_total_loss: 2.4481
Epoch [ 3600/ 4000] | d_X_loss: 0.4609 | d_Y_loss: 0.4706 | g_total_loss: 2.5283
Saved samples_cyclegan/sample-003600-X-Y.png
Saved samples_cyclegan/sample-003600-Y-X.png
Epoch [ 3610/ 4000] | d_X_loss: 0.4701 | d_Y_loss: 0.4385 | g_total_loss: 2.2440
Epoch [ 3620/ 4000] | d_X_loss: 0.4620 | d_Y_loss: 0.5137 | g_total_loss: 2.3411
Epoch [ 3630/ 4000] | d_X_loss: 0.5012 | d_Y_loss: 0.4016 | g_total_loss: 2.5949
Epoch [ 3640/ 4000] | d_X_loss: 0.4544 | d_Y_loss: 0.6884 | g_total_loss: 2.7748
Epoch [ 3650/ 4000] | d_X_loss: 0.5030 | d_Y_loss: 0.4884 | g_total_loss: 2.2566
Epoch [ 3660/ 4000] | d_X_loss: 0.5077 | d_Y_loss: 0.3717 | g_total_loss: 2.8597
Epoch [ 3670/ 4000] | d_X_loss: 0.4671 | d_Y_loss: 0.4055 | g_total_loss: 2.3179
Epoch [ 3680/ 4000] | d_X_loss: 0.5158 | d_Y_loss: 0.4422 | g_total_loss: 2.5458
Epoch [ 3690/ 4000] | d_X_loss: 0.4617 | d_Y_loss: 0.4774 | g_total_loss: 2.5249
Epoch [ 3700/ 4000] | d_X_loss: 0.4621 | d_Y_loss: 0.4121 | g_total_loss: 2.4438
Saved samples_cyclegan/sample-003700-X-Y.png
Saved samples_cyclegan/sample-003700-Y-X.png
Epoch [ 3710/ 4000] | d_X_loss: 0.5275 | d_Y_loss: 0.4446 | g_total_loss: 2.4235
Epoch [ 3720/ 4000] | d_X_loss: 0.5107 | d_Y_loss: 0.3947 | g_total_loss: 2.8609
Epoch [ 3730/ 4000] | d_X_loss: 0.4370 | d_Y_loss: 0.4570 | g_total_loss: 2.4371
Epoch [ 3740/ 4000] | d_X_loss: 0.4406 | d_Y_loss: 0.4588 | g_total_loss: 2.5237
Epoch [ 3750/ 4000] | d_X_loss: 0.6144 | d_Y_loss: 0.5171 | g_total_loss: 2.7802
Epoch [ 3760/ 4000] | d_X_loss: 0.4482 | d_Y_loss: 0.4989 | g_total_loss: 2.4623
Epoch [ 3770/ 4000] | d_X_loss: 0.5489 | d_Y_loss: 0.4193 | g_total_loss: 2.4189
Epoch [ 3780/ 4000] | d_X_loss: 0.5591 | d_Y_loss: 0.4857 | g_total_loss: 2.7536
Epoch [ 3790/ 4000] | d_X_loss: 0.6010 | d_Y_loss: 0.4397 | g_total_loss: 2.6416
Epoch [ 3800/ 4000] | d_X_loss: 0.4263 | d_Y_loss: 0.4687 | g_total_loss: 2.2402
Saved samples_cyclegan/sample-003800-X-Y.png
Saved samples_cyclegan/sample-003800-Y-X.png
Epoch [ 3810/ 4000] | d_X_loss: 0.5106 | d_Y_loss: 0.4076 | g_total_loss: 2.6183
Epoch [ 3820/ 4000] | d_X_loss: 0.4610 | d_Y_loss: 0.4426 | g_total_loss: 2.6008
Epoch [ 3830/ 4000] | d_X_loss: 0.4951 | d_Y_loss: 0.4481 | g_total_loss: 2.4581
Epoch [ 3840/ 4000] | d_X_loss: 0.4634 | d_Y_loss: 0.4234 | g_total_loss: 2.5233
Epoch [ 3850/ 4000] | d_X_loss: 0.4825 | d_Y_loss: 0.5023 | g_total_loss: 2.2000
Epoch [ 3860/ 4000] | d_X_loss: 0.4307 | d_Y_loss: 0.4178 | g_total_loss: 2.6253
Epoch [ 3870/ 4000] | d_X_loss: 0.4996 | d_Y_loss: 0.3790 | g_total_loss: 2.7360
Epoch [ 3880/ 4000] | d_X_loss: 0.4386 | d_Y_loss: 0.4673 | g_total_loss: 2.5566
Epoch [ 3890/ 4000] | d_X_loss: 0.4899 | d_Y_loss: 0.4662 | g_total_loss: 2.6500
Epoch [ 3900/ 4000] | d_X_loss: 0.4842 | d_Y_loss: 0.4719 | g_total_loss: 2.3678
Saved samples_cyclegan/sample-003900-X-Y.png
Saved samples_cyclegan/sample-003900-Y-X.png
Epoch [ 3910/ 4000] | d_X_loss: 0.5406 | d_Y_loss: 0.4946 | g_total_loss: 2.3800
Epoch [ 3920/ 4000] | d_X_loss: 0.4892 | d_Y_loss: 0.4305 | g_total_loss: 2.4429
Epoch [ 3930/ 4000] | d_X_loss: 0.5155 | d_Y_loss: 0.4533 | g_total_loss: 2.4481
Epoch [ 3940/ 4000] | d_X_loss: 0.4451 | d_Y_loss: 0.4216 | g_total_loss: 2.6169
Epoch [ 3950/ 4000] | d_X_loss: 0.4623 | d_Y_loss: 0.4417 | g_total_loss: 2.4625
Epoch [ 3960/ 4000] | d_X_loss: 0.4373 | d_Y_loss: 0.5331 | g_total_loss: 2.9990
Epoch [ 3970/ 4000] | d_X_loss: 0.4781 | d_Y_loss: 0.4527 | g_total_loss: 2.5412
Epoch [ 3980/ 4000] | d_X_loss: 0.4911 | d_Y_loss: 0.4547 | g_total_loss: 2.2217
Epoch [ 3990/ 4000] | d_X_loss: 0.5010 | d_Y_loss: 0.4349 | g_total_loss: 2.3333
Epoch [ 4000/ 4000] | d_X_loss: 0.4675 | d_Y_loss: 0.4143 | g_total_loss: 2.4023
Saved samples_cyclegan/sample-004000-X-Y.png
Saved samples_cyclegan/sample-004000-Y-X.png

Tips on Training and Loss Patterns

A lot of experimentation goes into finding the best hyperparameters such that the generators and discriminators don't overpower each other. It's often a good starting point to look at existing papers to find what has worked in previous experiments, I'd recommend this DCGAN paper in addition to the original CycleGAN paper to see what worked for them. Then, you can try your own experiments based off of a good foundation.

Discriminator Losses

When you display the generator and discriminator losses you should see that there is always some discriminator loss; recall that we are trying to design a model that can generate good "fake" images. So, the ideal discriminator will not be able to tell the difference between real and fake images and, as such, will always have some loss. You should also see that $D_X$ and $D_Y$ are roughly at the same loss levels; if they are not, this indicates that your training is favoring one type of discriminator over the other and you may need to look at biases in your models or data.

Generator Loss

The generator's loss should start significantly higher than the discriminator losses because it is accounting for the loss of both generators and weighted reconstruction errors. You should see this loss decrease a lot at the start of training because initial, generated images are often far-off from being good fakes. After some time it may level off; this is normal since the generator and discriminator are both improving as they train. If you see that the loss is jumping around a lot, over time, you may want to try decreasing your learning rates or changing your cycle consistency loss to be a little more/less weighted.

In [30]:
fig, ax = plt.subplots(figsize=(12,8))
losses = np.array(losses)
plt.plot(losses.T[0], label='Discriminator, X', alpha=0.5)
plt.plot(losses.T[1], label='Discriminator, Y', alpha=0.5)
plt.plot(losses.T[2], label='Generators', alpha=0.5)
plt.title("Training Losses")
plt.legend()
Out[30]:
<matplotlib.legend.Legend at 0x7f591fbf2990>

Evaluate the Result!

As you trained this model, you may have chosen to sample and save the results of your generated images after a certain number of training iterations. This gives you a way to see whether or not your Generators are creating good fake images. For example, the image below depicts real images in the $Y$ set, and the corresponding generated images during different points in the training process. You can see that the generator starts out creating very noisy, fake images, but begins to converge to better representations as it trains (though, not perfect).

Below, you've been given a helper function for displaying generated samples based on the passed in training iteration.

In [31]:
import matplotlib.image as mpimg

# helper visualization code
def view_samples(iteration, sample_dir='samples_cyclegan'):

    # samples are named by iteration
    path_XtoY = os.path.join(sample_dir, 'sample-{:06d}-X-Y.png'.format(iteration))
    path_YtoX = os.path.join(sample_dir, 'sample-{:06d}-Y-X.png'.format(iteration))

    # read in those samples
    try: 
        x2y = mpimg.imread(path_XtoY)
        y2x = mpimg.imread(path_YtoX)
    except:
        print('Invalid number of iterations.')

    fig, (ax1, ax2) = plt.subplots(figsize=(18,20), nrows=2, ncols=1, sharey=True, sharex=True)
    ax1.imshow(x2y)
    ax1.set_title('X to Y')
    ax2.imshow(y2x)
    ax2.set_title('Y to X')
In [32]:
# view samples at iteration 100
view_samples(100, 'samples_cyclegan')
In [33]:
# view samples at iteration 4000
view_samples(4000, 'samples_cyclegan')

Further Challenges and Directions

  • One shortcoming of this model is that it produces fairly low-resolution images; this is an ongoing area of research; you can read about a higher-resolution formulation that uses a multi-scale generator model, in this paper.
  • Relatedly, we may want to process these as larger (say 256x256) images at first, to take advantage of high-res data.
  • It may help your model to converge faster, if you initialize the weights in your network.
  • This model struggles with matching colors exactly. This is because, if $G_{YtoX}$ and $G_{XtoY}$ may change the tint of an image; the cycle consistency loss may not be affected and can still be small. You could choose to introduce a new, color-based loss term that compares $G_{YtoX}(y)$ and $y$, and $G_{XtoY}(x)$ and $x$, but then this becomes a supervised learning approach.
  • This unsupervised approach also struggles with geometric changes, like changing the apparent size of individual object in an image, so it is best suited for stylistic transformations.
  • For creating different kinds of models or trying out the Pix2Pix Architecture, this Github repository which implements CycleGAN and Pix2Pix in PyTorch is a great resource.

Once you are satified with your model, you are ancouraged to test it on a different dataset to see if it can find different types of mappings!


Different datasets for download

You can download a variety of datasets used in the Pix2Pix and CycleGAN papers, by following instructions in the associated Github repository. You'll just need to make sure that the data directories are named and organized correctly to load in that data.